Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Previously on Learning ASP.Net MVC...


I started with the idea of a project that would use user-configurable queries to do Google searches, store the information from the results and then perform various kinds of data analysis on them, displaying them based on what the user wants. So far, however, I have implemented Google authentication, written some theoretical posts and, lastly, upgraded the project from .NET Core 1.0 to version 1.1.

Well, it took me a while to get here because I was at a crossroads. I like the idea of dependency-injectable services to do data access. At the same time, the entire Entity Framework tutorial path kind of wants to strongly integrate EF with my projects. I mean, if I have a service that gives me the list of all items in the database and then I want only a few items, it would be bad design to filter the entire list in memory. As such, I would have to write a different method that returns items based on some kind of filter. On the other hand, Entity Framework code looks just like that "give me all you have, filtered by this", which is then translated into an efficient query to the database. One possibility would be to have my service return IQueryable<T>, so I could also use the system to generate the database code on the fly.

The Design


I've decided on the service architecture, against an EF type IQueryable way, because I want to be able to replace that service with anything, including something that doesn't work with a database or something that doesn't know how to dynamically create queries. Also, the idea that the service methods will describe exactly what I want appeals to me more than avoiding a bit of duplicated code.

Another thing to define now is the method through which I will implement the dependency injection. Being the control freak that I am, I would go with installing my own library, something like SimpleInjector, and configure it myself and use it explicitly. However, ASP.Net Core has dependency injection included out of the box, so I will use that.

As defined, the project needs queries to pass on to Google and a storage service for the results. It needs data services to manage these entities, as well as a service to abstract Google itself. The data gathering operation itself cannot be a simple REST call, since it might take a while; it must be a background task, and so must the data analysis. So we need a sort of job manager.

As per a good structured design, the data objects will be stored in a separate project, as well as the interfaces for the services we will be using.

Some code, please!


Well, let's start with the code of the project so far (found on GitHub) and get coding.

Before finding a solution to actually run the background code in the context of ASP.Net, let's write it inside a class. I am going to add a folder called Jobs and add a class in it called QueryProcessor with a method ProcessQueries. The code will be self explanatory, I hope.
public void ProcessQueries()
{
    var now = _timeService.Now;
    var queries = _queryDataService.GetUnprocessed(now);
    var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
        .SelectMany(q => _contentService.Query(q.Text));
    _contentDataService.Update(contentItems);
}

So we get the time - from a service, of course - and request the unprocessed queries for that time, then we extract the content items for each query, which are then updated in the database. The idea here is that when a query is defined for the first time, or when the interval since the last time the query was processed has elapsed, the query will be sent to the content service, from which content items will be received. These items will be stored in the database.

Now, I've kept the code as concise as possible: there is no indication yet of any implementation detail and I've written as little code as I need to express my intention. Yet, what are all these services? What is a time service? What is a content service? Where are they defined? In order to enable dependency injection, we will populate all of these fields from the constructor of the query processor. Here is how the class would look in its entirety:
using ContentAggregator.Interfaces;
using System.Linq;

namespace ContentAggregator.Jobs
{
    public class QueryProcessor
    {
        private readonly IContentDataService _contentDataService;
        private readonly IContentService _contentService;
        private readonly IQueryDataService _queryDataService;
        private readonly ITimeService _timeService;

        public QueryProcessor(ITimeService timeService, IQueryDataService queryDataService, IContentDataService contentDataService, IContentService contentService)
        {
            _timeService = timeService;
            _queryDataService = queryDataService;
            _contentDataService = contentDataService;
            _contentService = contentService;
        }

        public void ProcessQueries()
        {
            var now = _timeService.Now;
            var queries = _queryDataService.GetUnprocessed(now);
            var contentItems = queries.AsParallel().WithDegreeOfParallelism(3)
                .SelectMany(q => _contentService.Query(q.Text));
            _contentDataService.Update(contentItems);
        }
    }
}

Note that the services are only defined as interfaces, which we declare in a separate project called ContentAggregator.Interfaces, referenced above in the usings block.
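For reference, here is a sketch of how those interfaces might be declared, inferred purely from the way they are used above (the exact signatures in the real project may differ):

public interface ITimeService
{
    DateTime Now { get; }
}

public interface IQueryDataService
{
    IEnumerable<Query> GetUnprocessed(DateTime now);
}

public interface IContentService
{
    IEnumerable<ContentItem> Query(string text);
}

public interface IContentDataService
{
    void Update(IEnumerable<ContentItem> contentItems);
}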

Let's ignore the job processor mechanism for a moment and just run ProcessQueries in a test method in the main controller. For this I will have to make dependency injection work and implement the interfaces. For brevity I will do so in the main project, although it would probably be a good idea to do it in a separate ContentAggregator.Implementations project. But let's not get ahead of ourselves. First make the code work, then arrange it all nice, in the refactoring phase.

Implementing the services


I will create mock services first, in order to test the code as it is, so the following implementations just do as little as possible while still following the interface signature.
public class ContentDataService : IContentDataService
{
    private static readonly StringBuilder _sb;

    static ContentDataService()
    {
        _sb = new StringBuilder();
    }

    public void Update(IEnumerable<ContentItem> contentItems)
    {
        foreach (var contentItem in contentItems)
        {
            _sb.AppendLine($"{contentItem.FinalUrl}:{contentItem.Title}");
        }
    }

    public static string Output
    {
        get { return _sb.ToString(); }
    }
}

public class ContentService : IContentService
{
    private readonly ITimeService _timeService;

    public ContentService(ITimeService timeService)
    {
        _timeService = timeService;
    }

    public IEnumerable<ContentItem> Query(string text)
    {
        yield return
            new ContentItem
            {
                OriginalUrl = "http://original.url",
                FinalUrl = "https://final.url",
                Title = "Mock Title",
                Description = "Mock Description",
                CreationTime = _timeService.Now,
                Time = new DateTime(2017, 03, 26),
                ContentType = "text/html",
                Error = null,
                Content = "Mock Content"
            };
    }
}

public class QueryDataService : IQueryDataService
{
    public IEnumerable<Query> GetUnprocessed(DateTime now)
    {
        yield return new Query
        {
            Text = "Some query"
        };
    }
}

public class TimeService : ITimeService
{
    public DateTime Now
    {
        get
        {
            return DateTime.UtcNow;
        }
    }
}

Now all I have to do is declare the binding between interface and implementation. The magic happens in ConfigureServices, in Startup.cs:
services.AddTransient<ITimeService, TimeService>();
services.AddTransient<IContentDataService, ContentDataService>();
services.AddTransient<IContentService, ContentService>();
services.AddTransient<IQueryDataService, QueryDataService>();

They are all transient, meaning that for each request of an implementation the system will just create a new instance. Another popular method is AddSingleton.
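For comparison, a singleton registration would look like this (shown only as an illustration; the project registers everything as transient):
// Transient: a new TimeService is created every time an ITimeService is requested.
services.AddTransient<ITimeService, TimeService>();
// Singleton: a single instance would be created once and reused for the entire application lifetime.
services.AddSingleton<ITimeService, TimeService>();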

Using dependency injection


So, now I have to instantiate my query processor and run ProcessQueries.

One way is to set QueryProcessor as a service. I extract an interface, I add a new binding and then I give an interface as a parameter of my controller constructor:
[Authorize]
public class HomeController : Controller
{
    private readonly IQueryProcessor _queryProcessor;

    public HomeController(IQueryProcessor queryProcessor)
    {
        _queryProcessor = queryProcessor;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        _queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}
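For completeness, the extracted interface and the extra binding might look something like this (the interface name is my assumption):
public interface IQueryProcessor
{
    void ProcessQueries();
}

// in ConfigureServices
services.AddTransient<IQueryProcessor, QueryProcessor>();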
In fact, I don't even have to declare an interface. I can just use services.AddTransient<QueryProcessor>(); in ConfigureServices and it works as a parameter to the controller.

But what if I want to use it directly, resolving it manually without injecting it in the controller? One can inject an IServiceProvider instead. Here is an example:
[Authorize]
public class HomeController : Controller
{
    private readonly IServiceProvider _serviceProvider;

    public HomeController(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public IActionResult Index()
    {
        return View();
    }

    [HttpGet("/test")]
    public string Test()
    {
        var queryProcessor = _serviceProvider.GetService<QueryProcessor>();
        queryProcessor.ProcessQueries();
        return ContentDataService.Output;
    }
}
Yet you still need to use services.Add... in ConfigureServices and inject the service provider in the constructor of the controller.

There is a way of doing it completely separately like this:
var serviceProvider = new ServiceCollection()
    .AddTransient<ITimeService, TimeService>()
    .AddTransient<IContentDataService, ContentDataService>()
    .AddTransient<IContentService, ContentService>()
    .AddTransient<IQueryDataService, QueryDataService>()
    .AddTransient<QueryProcessor>()
    .BuildServiceProvider();
var queryProcessor = serviceProvider.GetService<QueryProcessor>();

This would be the way to encapsulate the ASP.Net Dependency Injection in another object, maybe in a console application, but clearly it would be pointless in our application.

The complete source code after these modifications can be found here. Test the functionality by going to /test on your local server after you start the app.

It is about time to revisit my series on ASP.Net MVC Core. Since my last blog post, the .Net Core version has changed to 1.1, so just installing the SDK and running the project was not going to work. This post explains how to upgrade a .Net project to the latest version.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Short version


Pressing the batch Update button for NuGet packages corrupted project.json. Here are the steps to successfully migrate a .Net Core project to a higher version.

  1. Download and install the .NET Core 1.1 SDK
  2. Change the version of the SDK in global.json - you can find out the SDK version by creating a new .Net Core project and checking what it uses
  3. Change "netcoreapp1.0" to "netcoreapp1.1" in project.json
  4. Change Microsoft.NETCore.App version from "1.0.0" to "1.1.0" in project.json
  5. Add
    "runtimes": {
      "win10-x64": { }
    },
    to project.json (a consolidated fragment covering steps 3-5 is shown right after this list)
  6. Go to "Manage NuGet packages for the solution", to the Update tab, and update projects one by one. Do not press the batch Update button for selected packages
  7. Some packages will restore, but remain in the list. Skip them for now
  8. Whenever you see a "downgrade" warning when restoring, go to those packages and restore them next
  9. For packages that tell you to upgrade NuGet, ignore them; it's an error that probably happens because you restore a package while the previous package restore had not completed
  10. For the remaining packages that just won't update, write down their names, uninstall them and reinstall them
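Taken together, the relevant parts of project.json after steps 3 to 5 would look roughly like this (other sections omitted):
{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    }
  },
  "frameworks": {
    "netcoreapp1.1": {}
  },
  "runtimes": {
    "win10-x64": { }
  }
}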

Code after changes can be found on GitHub

That should do it. For detailed steps of what I actually did to get to this concise list, read on.

Long version


Step 0 - I don't care, just load the damn project!


Downloaded the source code from GitHub, loaded the .sln with Visual Studio 2015. Got a nice blocking alert, because this was a .NET Core virgin computer:
Of course, I could have tried to install that version, but I wanted to upgrade to the latest Core.

Step 1 - read the Microsoft documentation


And here I went to Announcing the Fastest ASP.NET Yet, ASP.NET Core 1.1 RTM. I followed the instructions there, made Visual Studio 2015 load my project and automatically restore packages:
  1. Download and install the .NET Core 1.1 SDK
  2. If your application is referencing the .NET Core framework, you should update the references in your project.json file for netcoreapp1.0 or Microsoft.NetCore.App version 1.0 to version 1.1. In the default project.json file for an ASP.NET Core project running on the .NET Core framework, these two updates are located as follows:

    Two places to update project.json to .NET Core 1.1

  3. to be continued...

I got to the second step, but still got the alert...

Step 2 - fumble around


... so I commented out the sdk property in global.json. I got another alert:


This answer recommended uninstalling old versions of SDKs, in my case "Microsoft .NET Core 1.0.1 - SDK 1.0.0 Preview 2-003131 (x64)". Don't worry, it didn't work. More below:

TL;DR version: do not uninstall the Visual Studio .NET Core Tooling


And then... got the same No executable found matching command "dotnet-projectmodel-server" error again.

I created a new .NET core project, just to see the version of SDK it uses: 1.0.0-preview2-003131 and I added it to global.json and reopened the project. It restored packages and didn't throw any errors! Dude, it even compiled and ran! But now I got a System.ArgumentException: The 'ClientId' option must be provided. Probably it had something to do with the Secret Manager. Follow the steps in the link to store your secrets in the app. It then worked.

Step 1.1 (see what I did there?) - continue to read the Microsoft documentation


I removed the third step in the Microsoft instructions from the list above because it caused me some problems. So don't do it yet. It was:
  1. Update your ASP.NET Core packages dependencies to use the new 1.1.0 versions. You can do this by navigating to the NuGet package manager window and inspecting the “Updates” tab for the list of packages that you can update.

    Updating Packages using the NuGet package manager UI with the last pre-release build of ASP.NET Core 1.1


Since I had not upgraded the packages, as the Microsoft third step instructed, I decided to do it. 26 updates were waiting for me, so I optimistically selected them all and clicked Update. Of course, errors! One popped up as more interesting: Package 'Microsoft.Extensions.SecretManager.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Another was even more worrisome: Unexpected end of content while loading JObject. Path 'dependencies', line 68, position 0 in project.json. Somehow the update operation for the packages corrupted project.json! From a 3050 byte file, it was now 1617 bytes.

Step 3 - repair what the Microsoft instructions broke


Suspecting it was a problem with the NuGet package manager, I went to the link in the first error. But NuGet is included in Visual Studio 2015 and it was clearly the latest version. So the only solution was to go through each package and see which one caused the problem. I went to all 26 packages, pressed Install on each, and it worked. Apparently the batch Update button was causing the issue. Weirdly enough, two packages were installed but remained in the Update tab and also appeared in the Consolidate tab: BundleMinifier.Core and Microsoft.EntityFrameworkCore.Tools, although I can't do anything with them there.

Another package (Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0) caused another confusing error: Package 'Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0' uses features that are not supported by the current version of NuGet. To upgrade NuGet, see http://docs.nuget.org/consume/installing-nuget. Yet restarting Visual Studio led to the disappearance of the CodeGeneration.Tools error.

So I tried to build the project only to be met with yet another project.json corruption error: Can not find runtime target for framework '.NETCoreAPP, Version=v1.0' compatible with one of the target runtimes: 'win10-x64, win81-x64, win8-x64, win7-x64'. Possible causes: [blah blah] The project does not list one of 'win10-x64, win81-x64, win7-x64' in the 'runtimes' [blah blah]. I found the fix here, which was to add
"runtimes": {
"win10-x64": { }
},
to project.json.

It compiled. It worked.


Intro


I want to talk today about principles of software engineering. Just like design patterns, they range from useful to YAA (Yet Another Acronym). Usually, there is some guy or group of people who decide that a set of simple ideas might help software developers write better code. This is great! Unfortunately, they immediately feel the need to assign them to mnemonic acronyms that make you wonder if they didn't miss some principles from their sets because they were bad at anagrams.

Some are very simple and not worth exploring too much. DRY comes from Don't Repeat Yourself, which basically means don't write the same stuff in multiple places, or you will have to keep them synchronized at every change. Simply don't repeat yourself. Don't repeat yourself. See, it's at least annoying. KISS comes from Keep It Simple, Silly - yeah, let's be civil about it - anyway, the last letter is there just so that the acronym is actually a word. The principle states that avoiding unnecessary complexity will make your system more robust. A similar principle is YAGNI (You Aren't Gonna Need It - very New Yorkish sounding), which also frowns upon complexity, in particular the kind you introduce in order to solve a possible future problem that you don't have.

If you really want to fill your head with principles for software engineering, take a look at this huge list: List of software development philosophies.

But what I wanted to talk about was SOLID, which is so cool that not only does it sound like something you might want your software project to be, but it's a meta acronym, each letter coming from another acronym:
  • S - SRP
  • O - OCP
  • L - LSP
  • I - ISP
  • D - DIP

OK, I was just making it look harder than it actually is. Each of the (sub)acronyms stands for a principle (hence the last P in each) and even if they have suspiciously sounding names that hint at how much someone wanted to call their principles SOLID, they are really... err... solid. They refer specifically to object oriented programming, but I am sure they apply to other kinds as well. Let's take a look:

Single Responsibility


The idea is that each of your classes (or modules, units of code, functions, etc.) should strive towards only one functionality. Why? Because if you want to change your code you should first have a good reason, and then you should know the single (DRY) point where that reason applies. One responsibility equals one and only one reason to change.

Short example:
function getTeam() {
  var result=[];
  var strongest=null;
  this.fighters.forEach(function(fighter) {
    result.push(fighter.name);
    if (!strongest||strongest.power<fighter.power) {
      strongest=fighter;
    }
  });
  return {
    team:result,
    strongest:strongest
  };
}

This code iterates through the list of fighters and returns a list of their names. It also finds the strongest fighter and returns that as well. Obviously, it does two different things and you might want to change the code to have two functions that each do only one. But, you will say, this is more efficient! You iterate once and you get two things for the price of one! Fair enough, but let's see what the disadvantages are:
  • You need to know the exact format of the return object - that's not a big deal, but wouldn't you expect to have a team object returned by a getTeam function?
  • Sometimes you might want just the list of fighters, so computing the strongest is superfluous. Similarly, you might only want the strongest player.
  • In order to add stuff in the iteration loop, the code has become more complex - at least when reading it - than it has to be.

Here is how the code could - and should - have looked. First we split it into two functions:
function getTeam() {
  var result=[];
  this.fighters.forEach(function(fighter) {
    result.push(fighter.name);
  });
  return result;
}

function getStrongestFighter() {
  var strongest=null;
  this.fighters.forEach(function(fighter) {
    if (!strongest||strongest.power<fighter.power) {
      strongest=fighter;
    }
  });
  return strongest;
}
Then we refactor it to something simple and readable:
function getTeam() {
  return this.fighters
    .map(function(fighter) { return fighter.name; });
}

function getStrongestFighter() {
  return this.fighters
    .reduce(function(strongest,val) {
      return !strongest||strongest.power<val.power?val:strongest;
    });
}

Open/Closed


When you write your code the last thing you want to do is go back to it and change it again and again whenever you implement a new functionality. You want the old code to work, be tested to work, and allow new functionality to be built on top of it. In object oriented programming that simply means you can extend classes, but practically it also means the entire plumbing necessary to make this work seamlessly. Simple example:
function Fighter() {
  this.fight();
}
Fighter.prototype={
  fight : function() {
    console.log('slap!');
  }
};

function ProfessionalFighter() {
  Fighter.apply(this);
}
ProfessionalFighter.prototype={
  fight : function() {
    console.log('punch! kick!');
  }
};

function factory(type) {
  switch(type) {
    case 'f': return new Fighter();
    case 'pf': return new ProfessionalFighter();
  }
}

OK, this is a silly example, since I am lazily emulating inheritance in a prototype based language such as Javascript. But the result is the same: both constructors will .fight by default, but each of them will have different implementations. Note that while I cannibalized bits of Fighter to build ProfessionalFighter, I didn't have to change it at all. I also built a factory method that returns different objects based on the input. Both possible returns will have a .fight method.

The open/closed principle also applies to non-inheritance-based languages and even within classic OOP languages. I believe it leads to a natural outcome: preferring composition over inheritance. Write your code so that the various modules just seamlessly connect to each other, like Lego blocks, to build your end product.

Liskov substitution principle


No, L is not coming from any word that they could think of, so they dragged poor Liskov into it. Forget the definition, it's really stupid and complicated. Not KISSy at all! Assume you have the base class Fighter and the more specialized subclass ProfessionalFighter. If you had a piece of code that uses a Fighter object, then this principle says you should be able to replace it with a ProfessionalFighter and the piece of code would still be correct. It may not be what you want, but it would work.

But when does a subclass break this principle? One very ugly antipattern that I have seen is when general types know about their subtypes. Code like "if I am a professional fighter, then I do this" breaks almost all SOLID principles and it is stupid. Another case is when the general class declares everything its author can think of, either abstract or implemented as empty or throwing an error, and then each subclass implements one or another of the functions. Dude! If your fighter doesn't implement .fight, then it is not a fighter!

I don't want to add code to this, since the principle is simple enough. However, even if it is an idea that primarily applies to OOP it doesn't mean it doesn't have its uses in other types of programming languages. One might say it is not even related to programming languages, but to concepts. It basically says that an orange should behave like an orange if it's blue, otherwise don't call it a blue orange!

Interface segregation principle


This one is simple. It basically is the reverse of the Liskov: split your interfaces - meaning the desired functionality of your code - into pieces that are fully used. If you have a piece of code that uses a ProfessionalFighter, but all it does is use the .fight method, then use a Fighter, instead. Silly example:
public class Fighter {
    public virtual void Fight() {
        Console.WriteLine("slap!");
    }
}

public class EnglishFighter : Fighter {
    public override void Fight() {
        Console.WriteLine("box!");
    }
    public void Talk() {
        Console.WriteLine("Oy!");
    }
}

class Program {
    public static void Main() {
        // getMeAFighter() is assumed to be some factory method defined elsewhere
        EnglishFighter f=getMeAFighter();
        f.Fight();
    }
}

I don't even know if it's valid code, but anyway, the idea here is that there is no reason for me to declare and use the variable f as an EnglishFighter, if all it does is fight. Use a Fighter type. And a YAGNI on you if you thought "wait, but what if I want him to talk later on?".
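The fix described above is a one-line change - a sketch:
// Depend on the narrowest type that covers what the code actually uses.
Fighter f = getMeAFighter();
f.Fight();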

Dependency inversion principle


Oh, this is a nice one! But it doesn't live in a vacuum. It is related to both SRP and OCP as it states that high level modules should not depend on low level modules, only on their abstractions. In other words, use interfaces instead of implementations wherever possible.

I wrote an entire other post about Inversion of Control, the technique that allows you to properly use and enjoy interfaces, while keeping your modules independent, so I am not going to repeat things here. As a hint, simply replacing Fighter f=new Fighter() with IFighter f=new Fighter() is not enough. You must use a piece of code that decides for you what implementation of an interface will be used, something like IFighter f=GetMeA(typeof(IFighter)).

The principle is related to SRP because it says a boss should not be controlling or depending on what particular things the employees will do. Instead, he should just hire some trustworthy people and let them do their job. If a task depends so much on John that you can't ever fire him, you're in trouble. It is also related to OCP because the boss will behave like a boss no matter what employee changes occur. He may hire more people, replace some, fire some others, the boss does not change. Nor does he accept any responsibility, but that's a whole other principle ;)

Conclusions


I've explored some of the more common acronyms from hell related to software development and stuck mostly to SOLID, because... well, you have to admit, calling it SOLID works! Seriously now, following principles such as these (choose your own, based on your own expertise and coding style) will help you a lot later on, when the complexity of your code explodes. In a company there is always that one poor guy who knows everything and can't work well because everybody else is asking him why and how something is the way it is. If the code is properly separated on solid principles, you just need to look at it to understand what it does, and the efficiency of your changes stays high. One small change for man, one giant leap for the project. Let that unsung hero write their own code for once and reach software Valhalla.

SRP keeps modules separated, OCP keeps code free of need for modifications, LSP and ISP make you use the basest of classes or interfaces, reinforcing the separation of modules, while DIP is helping you sharpen the boundaries between modules. Taken together, SOLID principles are obviously about focus, allowing you to work on small, manageable parts of the code without needing knowledge of others or having to make changes to them.

Keep coding!

I am mentally preparing to give a talk about dependency injection and inversion of control and why they are important, so I intend to clarify my thoughts on the blog first. This has been spurred by seeing how many talented and even experienced programmers don't really understand the concepts and why they should use them. I also intend to briefly explore these concepts in the context of programming languages other than C#.

And yes, I know I've started an ASP.Net MVC exploration series and stopped midway, and I truly intend to continue it, it's just that this is more urgent.

Head on intro


So, instead of going straight to the definitions, let me give you some examples.
public class MyClass {
    public IEnumerable<string> GetData() {
        var provider=new StringDataProvider();
        var data=provider.GetStringsNewerThan(DateTime.Now-TimeSpan.FromHours(1));
        return data;
    }
}
In this piece of code I create a class that has a method that gets some text. That's why I use a StringDataProvider, because I want to be provided with string data. I named my class so that it describes as best as possible what it intends to do, yet that descriptiveness is getting lost up the chain when my method is called just GetData. It is called so because it is the data that I need in the context of MyClass, which may not care, for example, that it is in string format. Maybe MyClass just displays enumerations of objects. Another issue with this is that it hides the date and time parameter that I pass in the method. I am getting string data, but not all of it, just for the last hour. Functionally, this will work fine: task complete, you can move to the next. Yet it has some nagging issues.

Dependency Injection


Let me show you the same piece of code, written with dependency injection in mind:
public class MyClass {
    private IDataProvider _dataProvider;
    private IDateTimeProvider _dateTimeProvider;

    public MyClass(IDataProvider dataProvider, IDateTimeProvider dateTimeProvider) {
        this._dataProvider=dataProvider;
        this._dateTimeProvider=dateTimeProvider;
    }

    public IEnumerable<string> GetData() {
        var oneHourBefore=_dateTimeProvider.Now-TimeSpan.FromHours(1);
        var data=_dataProvider.GetDataNewerThan(oneHourBefore);
        return data;
    }
}
A lot more code, but it solves several issues while introducing so many benefits that I wonder why people don't code like this from the get go.

Let's analyse this for a bit. First of all I introduce a constructor to MyClass, one that accepts and caches two parameters. They are not class types, but interfaces, which declare the intention for any class implementing them. The method then does the same thing as in the original example, using the providers it cached. Now, when I write the code of the class I don't actually need to have any provider implementation. I just declare what I need and worry about it later. I also don't need to inject real providers, I can mock them so that I can test my class as standalone. Note that the previous implementation of the class would have returned different data based on the system time and I had no way to control that behavior. The best benefit, for me, is that now the class is really descriptive. It almost reads like English: "Hi, folks, I am a class that needs someone to give me some data and the time of day and I will give you some processed data in return!". The rule of thumb is that for each method, external factors that may influence its behavior must be abstracted away. In our case if the date time provider provides the same time and the data provider the same data, the effect of the method is always the same.

Note that the interface I used was not IStringDataProvider, but IDataProvider. I don't really care, in my class, that the data is a bunch of strings. There is something called the Single Responsibility Principle, which says that a class or a method or some sort of unit of computation should try to only have one responsibility. If you change that code, it should only affect one area. Now, real life is a little different and classes do many things in many directions, yet they can implement any number of interfaces. The interfaces themselves can declare only one responsibility, which is why this is so nice. I don't actually have to have a class that is only a data provider, but in the context of my class, I only need that part and I am clearly declaring my intent in the code.

This here is called dependency injection, which is a fancy expression for saying "my code receives all third party instances as parameters". It is also in line with the Single Responsibility Principle, as now your class doesn't have to carry the responsibility of knowing how to instantiate the classes it needs. It makes the code more modular, easier to test, more legible and more maintainable.

But there is a problem. While before I was using something like new MyClass().GetData(), now I have to push the instantiation of the providers somewhere up the stream and do maybe something like this:
var dataProvider=new StringDataProvider();
var dateTimeProvider=new DateTimeProvider();
var myClass=new MyClass(dataProvider,dateTimeProvider);
myClass.GetData();
The apparent gains were all for naught! I just pushed the same ugly code somewhere else. But here is where Inversion of Control comes in. What if you never needed to instantiate anything again? What if you never actually had to write any new Something() code?

Inversion of Control


Inversion of Control actually takes over the responsibility of creating instances from you. With it, you might get this code instead:
public interface IMyClass {
    IEnumerable<string> GetData();
}

public class MyClass:IMyClass {
    private IDataProvider _dataProvider;
    private IDateTimeProvider _dateTimeProvider;

    public MyClass(IDataProvider dataProvider, IDateTimeProvider dateTimeProvider) {
        this._dataProvider=dataProvider;
        this._dateTimeProvider=dateTimeProvider;
    }

    public IEnumerable<string> GetData() {
        var oneHourBefore=_dateTimeProvider.Now-TimeSpan.FromHours(1);
        var data=_dataProvider.GetDataNewerThan(oneHourBefore);
        return data;
    }
}
Note that I created an interface for MyClass to implement, one that declares my GetData method. Now, to use it, I could write something like this:
var myClass=Dependency.Get<IMyClass>();
myClass.GetData();

Wow! What happened here? I just used a magical class called Dependency that gets me an instance of IMyClass. And I really don't care how it does it. It can discover implementations by itself or maybe I am manually binding interfaces to implementations when the application starts (for example Dependency.Bind<IMyClass,MyClass>();). When it needs to create a new MyClass it automatically sees that it needs two other interfaces as parameters, so it gets implementations for those first and continues up the chain. It is called a dependency chain and the container will go through it all to simply "Get" you what you need. There are many inversion of control frameworks out there, but the concept is so simple that one can make their own easily.
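To make that idea concrete, here is a toy container of my own, purely for illustration (real frameworks also handle lifetimes, multiple constructors, error reporting and so on):
using System;
using System.Collections.Generic;
using System.Linq;

public static class Dependency
{
    private static readonly Dictionary<Type, Type> _bindings = new Dictionary<Type, Type>();

    // Bind an interface to the implementation that should be created for it
    public static void Bind<TInterface, TImplementation>() where TImplementation : TInterface
    {
        _bindings[typeof(TInterface)] = typeof(TImplementation);
    }

    public static T Get<T>()
    {
        return (T)Resolve(typeof(T));
    }

    private static object Resolve(Type type)
    {
        // if an interface was requested, look up the bound implementation
        var concrete = _bindings.ContainsKey(type) ? _bindings[type] : type;
        // pick a constructor and recursively resolve each of its parameters (the dependency chain)
        var constructor = concrete.GetConstructors().First();
        var arguments = constructor.GetParameters()
            .Select(p => Resolve(p.ParameterType))
            .ToArray();
        return constructor.Invoke(arguments);
    }
}

// usage, matching the example above:
// Dependency.Bind<IMyClass, MyClass>();
// var myClass = Dependency.Get<IMyClass>();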

And I get another benefit: if I want to display some other type of data, all I have to do is instruct the dependency container that I want another implementation for the interface. I can even think about versioning: take a class that I know does the job and compare it with a new implementation of the same interface. I can tell it to use different versions based on the client used. And all of this in exactly one place: the dependency container bindings. You may want to plug different implementations provided by third parties and all they have to care about is respecting the contract in your interface.



Solution structure


This way of writing code forces some changes in the structure of your projects. If all you have is written in a single project, you don't care, but if you want to split your work in several libraries, you have to take into account that interfaces need to be referenced by almost everything, including third party modules that you want to plug. That means the interfaces need their own library. Yet in order to declare the interfaces, you need access to all the data objects that their members need, so your Interfaces project needs to reference all the projects with data objects in them. And that means that your logic will be separated from your data objects in order to avoid circular dependencies. The only project that will probably need to go deeper will be the unit and integration test project.

Bottom line: in order to implement this painlessly, you need an Entities library, containing data objects, then an Interfaces library, containing the interfaces you need and, maybe, the dependency container mechanism, if you don't put it in yet another library. All the logic needs to be in other projects. And that brings us to a nice side effect: the only connection between logic modules is done via abstractions like interfaces and simple data containers. You can now substitute one library with another without actually caring about the rest. The unit tests will work just the same, the application will function just the same and functionality can be both encapsulated and programmatically described.

There is a drawback to this. Whenever you need to see how some method is implemented and you navigate to definition, you will often reach the interface declaration, which tells you nothing. You then need to find classes that implement the interface or to search for uses of the interface method to find implementations. Even so, I would say that this is an IDE problem, not a dependency injection issue.

Other points of view


Now, the intro above describes what I understand by dependency injection and inversion of control. The official definition of Dependency Injection claims it is a subset of Inversion of Control, not a separate thing.

For example, Martin Fowler says that when he and his fellow software pattern creators thought of it, they called it Inversion of Control, but they decided that it was too broad a term, so they moved to calling it Dependency Injection. That seems strange to me, since I can describe situations where dependencies are injected, or at least passed around, but they are manually instantiated, or situations where the creation of instances is out of the control of the developer, but no dependencies are passed around. He seems to see both as one thing. On the other hand, the pattern where dependencies are injected by constructor, property setters or weird implementation of yet another set of interfaces (which he calls Dependency Injection) is different from Service Locator, where you specifically ask for a type of service.

Wikipedia says that Dependency Injection is a software pattern which implements Inversion of Control to resolve dependencies, while it calls Inversion of Control a design principle (so, not a pattern?) in which custom-written portions of a computer program receive the flow of control from a generic framework. It even goes so far as to say Dependency Injection is a specific type of Inversion of Control. Anyway, the pages there seem to follow the general definitions that Martin Fowler does, which pits Dependency Injection versus Service Locator.

On StackOverflow a very well viewed answer sees dependency injection as "giving an object its instance variables". I tend to agree. I also liked another answer below that said "DI is very much like the classic avoiding of hardcoded constants in the code." It makes one think of a variable as an abstraction for values of a certain type. Same page holds another interesting view: "Dependency Injection and dependency Injection Containers are different things: Dependency Injection is a method for writing better code, a DI Container is a tool to help injecting dependencies. You don't need a container to do dependency injection. However a container can help you."

Another StackOverflow question has tons of answers explaining how Dependency Injection is a particular case of Inversion of Control. They all seem to have read Fowler before answering, though.

A CodeProject article explains how Dependency Injection is just a flavor of Inversion of Control, others being Service Locator, Events, Delegates, etc.

Composition over inheritance, convention over configuration


An interesting side effect of this drastic decoupling of code is that it promotes composition over inheritance. Let's face it: inheritance was supposed to solve all of humanity's problems and it failed. You either have an endless chain of classes inheriting from each other from which you usually use only one or two or you get misguided attempts to allow inheritance from multiple sources which complicates understanding of what does what. Instead interfaces have become more widespread, as declarations of intent, while composition has provided more of what inheritance started off as promising. And what is dependency injection if not a sort of composition? In the intro example we compose a date time provider and a data provider into a time aware data provider, all the time while the actors in this composition need to know nothing else than the contracts each part must abide by. Do that same thing with other implementations and you get a different result. I will go as far as to say that inheritance defines what classes are, while composition defines what classes do, which is what matters in the end.

Another interesting effect is the wider adoption of convention over configuration. For example you can find the default implementation of an interface as the class that implements it and has the same name minus the preceding "I". Rather than explicitly tell the framework that we want to use the Manager class each time someone needs an IManager implementation, it can figure it out for itself by naming alone. This would never work if the responsibility of getting class instances resided with each method using them.
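As a sketch of how such convention-based binding might be done (my own illustration, assumed to run inside ConfigureServices where services is the IServiceCollection):
// for every interface IFoo found in the assembly, bind it to a class named Foo, if one exists
var types = typeof(Startup).GetTypeInfo().Assembly.GetTypes();
var classes = types.Where(t => t.GetTypeInfo().IsClass && !t.GetTypeInfo().IsAbstract).ToList();
foreach (var contract in types.Where(t => t.GetTypeInfo().IsInterface && t.Name.StartsWith("I")))
{
    var conventionalName = contract.Name.Substring(1); // "IManager" -> "Manager"
    var implementation = classes.FirstOrDefault(c => c.Name == conventionalName
        && contract.GetTypeInfo().IsAssignableFrom(c.GetTypeInfo()));
    if (implementation != null)
    {
        services.AddTransient(contract, implementation);
    }
}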

Real life examples


Simple Injector


If you look on the Internet, one of the first dependency injection frameworks you find for .Net is Simple Injector, which works on every flavor of .Net including Mono and Core. It's as easy to use as installing the NuGet package and doing something like this:
// 1. Create a new Simple Injector container
var container = new Container();

// 2. Configure the container (register)
container.Register<IUserRepository, SqlUserRepository>(Lifestyle.Transient);
container.Register<ILogger, MailLogger>(Lifestyle.Singleton);

// 3. Optionally verify the container's configuration.
container.Verify();

// 4. Get the implementation by type
IUserService service = container.GetInstance<IUserService>();

ASP.Net Core


ASP.Net Core has dependency injection built in. You configure your bindings in ConfigureServices:
public void ConfigureServices(IServiceCollection svcs)
{
    svcs.AddSingleton(_config);

    if (_env.IsDevelopment())
    {
        svcs.AddTransient<IMailService, LoggingMailService>();
    }
    else
    {
        svcs.AddTransient<IMailService, MailService>();
    }

    svcs.AddDbContext<WilderContext>(ServiceLifetime.Scoped);

    // ...
}
then you use any of the registered classes and interfaces as constructor parameters for controllers, or even as method parameters (see FromServicesAttribute).
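As an illustration, method injection in a controller might look like this (the controller and route are made up; IMailService is the interface registered above):
using Microsoft.AspNetCore.Mvc;

public class NotificationController : Controller
{
    // The service is resolved per call and passed as a method parameter
    // instead of being injected through the constructor.
    [HttpPost("/notify")]
    public IActionResult Notify([FromServices] IMailService mailService)
    {
        // use mailService here...
        return Ok();
    }
}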

Managed Extensibility Framework


MEF is a big beast of a framework, but it can simplify a lot of work you would have to do to glue things together, especially in extensibility scenarios. Typically one would use attributes to declare which interface something "exports" and then use other attributes to "import" implementations in properties and values. All you need to do is put them in the same place. Something like this:
[Export(typeof(ICalculator))]
class SimpleCalculator : ICalculator {
    //...
}

class Program {

    [Import(typeof(ICalculator))]
    public ICalculator calculator;

    // do something with calculator
}
Of course, in order for this to work seamlessly you need stuff like this, as well:
private Program()
{
    // An aggregate catalog that combines multiple catalogs
    var catalog = new AggregateCatalog();
    // Adds all the parts found in the same assembly as the Program class
    catalog.Catalogs.Add(new AssemblyCatalog(typeof(Program).Assembly));
    catalog.Catalogs.Add(new DirectoryCatalog("C:\\Users\\SomeUser\\Documents\\Visual Studio 2010\\Projects\\SimpleCalculator3\\SimpleCalculator3\\Extensions"));

    // Create the CompositionContainer with the parts in the catalog
    _container = new CompositionContainer(catalog);

    // Fill the imports of this object
    try
    {
        this._container.ComposeParts(this);
    }
    catch (CompositionException compositionException)
    {
        Console.WriteLine(compositionException.ToString());
    }
}

Dependency Injection in other languages


Admit it, C# is great, but it is not by far the most used computer language. That place is reserved, at least for now, for Javascript. Not only is it untyped and dynamic, but Javascript isn't even a class inheritance language. It uses the so called prototype inheritance, which uses an instance of an object attached to a type to provide default values for the instance of said type. I know, it sounds confusing and it is, but what is important is that it has no concept of interfaces or reflection. So while it is trivial to create a dictionary of instances (or functions that create instances) of objects which you could then use to get what you need by using a string key (something like var manager=Dependency.Get('IManager');, for example) it is difficult to imagine how one could go through the entire chain of dependencies to create objects that need other objects.

And yet this is done, by AngularJs, RequireJs or any number of modern Javascript frameworks. The secret? Using regular expressions to determine the parameters needed for a constructor function after turning it to string. It's complicated and beyond the scope of this blog post, but take a look at this StackOverflow question and its answers to understand how it's done.

Let me show you an example from AngularJs:
angular.module('myModule', [])
  .directive('directiveName', ['depService', function(depService) {
    // ...
  }]);
In this case the key/type of the service is made explicit using an array notation that says "this is the list of parameters that the dependency injector needs to give to the function", but this might as well have been written just as the function:
angular.module('myModule', [])
  .directive('directiveName', function(depService) {
    // ...
  });
In this case Angular would use the regular expression approach on the function string.


What about other languages? Java is very much like C# and the concepts there are similar. Even though they are all C-flavored languages, C++ is very different, yet dependency injection can be achieved there as well. I am not a C++ developer, so I can't tell you much about that, but take a look at this StackOverflow question and its answers; it is claimed that there is no single method, but many that can be used to do dependency injection in C++.

In fact, the only languages I can think of that can't do dependency injection are silly ones like SQL. Since you cannot (reasonably) define your own types or pass functions along, the concept makes no sense. Even so, one can imagine creating dummy stored procedures that other stored procedures would use in order to be tested. There is no reason why you wouldn't use dependency injection if the language allows for it.

Testability


I briefly mentioned unit testing. Dependency injection works hand in hand with automated testing. Given that the practice creates modules of software that give reproducible results for the same inputs and account for all the inputs, testing becomes a breeze. Let me give you some examples using Moq, a mocking library for .Net:
var dateTimeMock=new Mock<IDateTimeProvider>();
dateTimeMock
    .Setup(m=>m.Now)
    .Returns(new DateTime(2016,12,03));

var dataMock=new Mock<IDataProvider>();
dataMock
    .Setup(m=>m.GetDataNewerThan(It.IsAny<DateTime>()))
    .Returns(new[] { "test","data" });

var testClass=new MyClass(dateTimeMock.Object, dataMock.Object);

var result=testClass.GetData();
AssertDeepEqual(result,new[] { "test","data" });

First of all, I take care of all dependencies. I create a "mock" for each of them and I "set up" the methods or property setters/getters that interest me. I don't really need to set up the date time mock for Now, since the data from the data provider is always the same no matter the parameter, but it's there for you to see how it's done. Second, I instantiate the class I want to test using the Object property of my mocks, which returns an object that implements the type given as a generic parameter in the constructor. Third I assert that the side effects of my call are the ones I expect. The mocks need to be as dumb as possible. If you feel you need to write code to define your mocks you are probably doing something wrong.

The type of the tests, for people who are not familiar with this concept, is usually a fully positive one - that is, give fully valid data and expect the correct result - followed by many negative ones, where the correct data is made incorrect in all possible ways and it is verified that the method fails. If there are many combinations of data that would be considered valid, you need a test for each of them.

Note that the test is instantiating the test class directly, using the constructor. We are not testing the injector here, but the actual class.

Conclusions


What I appreciate most about Dependency Injection is that it forces you to write code that has clear boundaries defined by interfaces. Once this is achieved, you can go write your own stuff and not care about what other people do with theirs. You can test your modules without even caring if the rest of the project even exists. It allows you to refactor code in steps and with a lot more confidence, since you are covered by unit tests.

While some people work on fire-and-forget projects, like small games or utilities, and don't care about maintainability - one of the most touted reasons for using unit tests and dependency injection - these practices bring so many other benefits that are almost impossible to get otherwise.

The entire point of this is reducing the complexity of dependencies, which include not only the modules in your application, but also the support frame for them, like people working on them. While some managers might not see the wisdom of reducing friction between software components, surely they can see the positive value of reducing friction between people.

There was one other topic that I wanted to touch on, but it is vast and I don't yet have enough experience with it; still, it feels very attractive to me: refactoring old code to use dependency injection. Best practices, how to make it safe enough and fast enough to make managers approve it and so on. Perhaps another post later on. I was thinking of a combination of static analysis and automated methods, like replacing all usages of "new" with a single point of instantiation, warning about static methods and properties, automatically replacing known bad practices like DateTime.Now and so on. It might be interesting, right?

I hope I wasn't too confusing and I appreciate any feedback you have. I will be working on a presentation file with similar content, so any help will go into doing a better job explaining it to others.

A colleague of mine hit a strange bug today. It so happened that we use a bastardized dependency injection method that takes into account the WCF session before returning an implementation of an interface. In a piece of code the injection failed and we couldn't see why for a while. Let me give you a simplified version:
var someManager=Package.Get<IManager>();
var someDTOs=Cache.GetDatabaseObjects().Select(x=>x.Pack());

public class DataObject {
    public string Data {get;set;}
    public DataObjectDTO Pack() {
        var anotherManager=Package.Get<IAnother>();
        return new DataObjectDTO {
            Data=anotherManager.Process(Data)
        };
    }
}

Package.Get will attempt to find a session object and, if there is none, it will use another mechanism; but if it does find one, it will only use it if it is not expired or invalid, otherwise it throws an exception. This code failed in the Pack method, when trying to get an instance of IAnother. Please take a few moments to reflect on why (and no, it's not that the session expired between calls).


Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services


The previous version of Entity Framework was 6 and the current one is Entity Framework Core 1.0, although for a few years they had been going with Entity Framework 7. It might seem that they changed the naming to be consistent with .NET Core, but according to them they did it to avoid confusion. The new version sprouted from the idea of "EF everywhere", just like .Net Core is ".Net everywhere", and is a rewrite - a port, as they chose to call it - with some extra features but also lacking some of the functionality EF6 had - or, better said, has, since they continue to support it for .NET proper. In this post I will examine the history and some of the basic concepts related to working with Entity Framework, as opposed to a more direct approach (like opening a System.Data.SqlConnection and issuing SqlCommands).

Entity Framework history


Entity Framework started as an ORM, a class of software that abstracts database access. The term itself is either a bit obsolete, with the advent of databases that call themselves non-relational, or rebelliously exact, recognizing that anything that can be called a database needs to also encode relationships between data. But that's another topic altogether. When Entity Framework was designed, it was all about abstracting SQL into an object oriented framework. How would that work? You would define entities, objects that inherited from an EntityBase class, and decorate their properties with attributes defining restrictions that databases have but objects don't, like the size of a field. You also had some default methods that could be overridden in order to control very specific custom requirements. In the background, objects would be mapped to tables, their simple properties to columns and their more complex properties to other tables that had a foreign key relationship with the table mapped to the owner object.

There were some issues with this system that quickly became apparent. With the data layer separation idea going strong, it was really cumbersome and ugly to move around objects that inherited from an entire hierarchy of Entity Framework classes and held state in ways that were almost opaque to the user. Users demanded the use of POCOs, a way to separate the functionality of EF from the data objects that were used through all the tiers of the application. At the time the solution was mostly to use simple objects within your application and then translate them to data access objects which were entities.

Microsoft also recognized this and in further iterations of EF, they went full POCO. But this enabled them to also move from one way of thinking to another. At the beginning the focus was on the database. You had your database structure and your data access layer and you wanted to add EF to your project, meaning you needed to map existing tables to C# objects. But now, you could go the other way around. You started with an application using plain objects and then just slapped EF on and asked it to create and maintain the database. The first way of thinking was coined "database first" and the other "code first".

In seven iterations of the framework, things have been changed and updated quite a lot. You can imagine that successfully adapting to legacy database structures, while seamlessly abstracting changes to that structure and completely managing the mapping of objects to the database, was no easy task. There were ups and downs, but Microsoft stuck to their guns and now they are making the strong argument that all your data manipulation should be done via EF. That's bold and it would be really stupid if Entity Framework weren't a good product they have full confidence in. Which they do. They moved from a framework that was embedded in .NET, to one that was partially embedded with some extra code kept separate and then, with EF6, they went full open source. EF Core is also open source and .NET Core is free of EF specific classes.

Also, EF Core is more friendly towards non relational databases, so you either consider ORM an all encompassing term or EF is no longer just an ORM :)

In order to end this chapter, we also need to discuss alternatives.

Ironically, both the ancestor of and the main competitor for Entity Framework was LINQ to SQL. If you don't know what LINQ is, you should take the time to look it up, since it has been an integral part of .NET since version 3.5. In Linq2Sql you would manually map objects to tables, then use the mapping in your code. The management of the database and of the mapping was all up to you. When EF came along, it was an improvement over this idea, with the major advantage (or flaw, depending on your political stance) that it handled schema mapping and management for you, as much as possible.

Another system that was and still is widely used separates data access based on intent, not on structure. Basically, if you had the need to add/get the names of people from your People table, you would have another project with some object hierarchy that in the end had methods like AddPeople and GetPeople. If you didn't need to delete or update people, you didn't have an API for it. Since the intent was clear, so was the structure and the access to the database, all encapsulated - manually - into this data access layer project. If you wanted to get people by name, for example, you had to add that functionality and code all the intermediary access. This had the advantage (or flaw) that you had someone who was good with databases (and a bit with code) handling the maintenance of the data access layer, basically a database admin with some code writing permissions. Some people love the control over the entire process, while others hate that they need to understand the underlying database in order to access data.

From my perspective, it seems as if there is an argument between people who want more control over what is going on and people who want more ease of development. The rest is more of an architectural discussion which is irrelevant as far as EF is concerned. However, it seems to me that the Entity Framework team has worked hard to please both sides of that argument, going for simplicity, but allowing very fine control down the line. It also means that this blog post cannot possibly cover everything about Entity Framework.

Getting started with Entity Framework


So, how do things look in EF Core 1.0? Things are still split down the middle in "code first" and "database first", but code first is the recommended way for starting new projects. Database first is something that must be supported in perpetuity just in case you want to migrate to EF from an existing database.

Database first


Imagine you have tables in an SQL server database. You want to switch to EF so you need to somehow map the existing data to entities. There is a tutorial for that: ASP.NET Core Application to Existing Database (Database First), so I will just quickly go over the essentials.

First thing is to use NuGet to install EF in your project:
Install-Package Microsoft.EntityFrameworkCore.SqlServer
and then add
"Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
to the project.json tools section. For the Database First approach we also need other stuff like:
Install-Package Microsoft.EntityFrameworkCore.Tools –Pre
Install-Package Microsoft.EntityFrameworkCore.SqlServer.Design
Final touch, running
Scaffold-DbContext "<Sql connection string>" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models

At this time alarm bells are sounding already. Wait! I only gave it my database connection string, how can it automagically turn this into C# code and work?

If we look at the code to create the sample database in the tutorial above, there are two tables: Blog and Post and they are related via primary key/foreign key as is recommended to create an SQL database. Columns are clearly defined as NULL or NOT NULL and the size of text fields is conveniently Max.



The process created some interesting classes. Besides the properties that map to fields, the Blog class has a property of type ICollection<Post> which is instantiated with a HashSet<Post>. The real fun is the BloggingContext class, which inherits from DbContext and, in its OnModelCreating override, configures the relationships in the database.
  • Enforcing the required status of the blog Url:
    modelBuilder.Entity<Blog>(entity =>
    {
    entity.Property(e => e.Url).IsRequired();
    });
  • Defining the one-to-many relationship between Blog and Post:
    modelBuilder.Entity<Post>(entity =>
    {
    entity.HasOne(d => d.Blog)
    .WithMany(p => p.Post)
    .HasForeignKey(d => d.BlogId);
    });
  • Having the root sets used to access entities:
    public virtual DbSet<Blog> Blog { get; set; }
    public virtual DbSet<Post> Post { get; set; }

The first thing to surprise me, honestly, is that the data model classes are as bare as possible. I would have expected some attributes on the properties defining their state as required, for example. EF Core allows you to keep the classes free of data annotations, while also supporting an annotation based system. The collections are interfaces and they are only instantiated with a concrete implementation in the constructor. An interesting choice for the collection type is HashSet. As opposed to a List, it does not allow access via indexers, only enumerators. It is designed to optimize search: finding an item in a HashSet does not depend on the size of the collection. Set operations like union and intersection can be used efficiently with HashSet, as well.

HashSet also does not allow duplicates and that may cause some sort of confusion. How does one define a duplicate? It uses IEqualityComparer. However, a HashSet can be instantiated with a custom IEqualityComparer implementation. Alternatively, the Equals and GetHashCode methods can be overridden in the entities themselves. People are divided over whether one should use such mechanisms to optimize Entity Framework functionality, but keep in mind that normally EF would only keep in memory stuff that it immediately needs. Such optimizations are more likely to cause maintainability problems than save processing time.
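Just to make the mechanism concrete - this is my own illustration, not something EF asks for - a comparer that treats posts with the same id as duplicates could look like this:
using System.Collections.Generic;

public class PostByIdComparer : IEqualityComparer<Post>
{
    // two posts are considered "the same" if they have the same PostId
    public bool Equals(Post x, Post y) => x?.PostId == y?.PostId;
    public int GetHashCode(Post post) => post.PostId.GetHashCode();
}

// var posts = new HashSet<Post>(new PostByIdComparer());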

Database first seems to me just a way to work with Entity Framework after using a migration tool. It sounds great, but there are probably a lot of small issues that one has to gain experience with when dealing with real life databases. I will blog about it if I get to doing something like this.

Code first


The code first tutorial goes the other direction, obviously, but has some interesting differences that tell me that a better model for migrating even existing databases is to start code first, then find a way to migrate the data from the existing database to the new one. This has the advantage that it allows for refactoring the database as well as providing some sort of verification mechanism when comparing the old with the new structure.

The setup is similar: use NuGet to install EF in your project:
Install-Package Microsoft.EntityFrameworkCore.SqlServer
then add
"Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"
to the project.json tools section.

Then we create the models: a simple DbContext inheritance, containing DbSets of Blog and Post, and the data models themselves: Blog and Post. Here is the code:
public class BloggingContext : DbContext
{
public BloggingContext(DbContextOptions<BloggingContext> options)
: base(options)
{ }

public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
}

public class Blog
{
public int BlogId { get; set; }
public string Url { get; set; }

public List<Post> Posts { get; set; }
}

public class Post
{
public int PostId { get; set; }
public string Title { get; set; }
public string Content { get; set; }

public int BlogId { get; set; }
public Blog Blog { get; set; }
}

Surprisingly, the tutorial doesn't go into any other changes to this code. There are no HashSets, there are no restrictions over what is required or not and no configuration of how the classes are related to each other. A video demo of this also shows the created database and it contains primary keys. A blog has a primary key on BlogId, for example. To me that suggests that convention over configuration is also used in the background. The SomethingId property of a class named Something will automatically be considered the primary key (as will a property simply named Id). Also, if you look at the code that EF executes when creating the database (these are called migrations and are pretty cool, I'll discuss them later in the post), Blogs are connected to Posts via foreign keys, so this thing works wonders if you name your entities right. I also created a small console application to test this and it worked as advertised.
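For reference, here is roughly what my little console test looked like - take it as a sketch, with a made up local connection string, rather than the exact code:
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class Program
{
    public static void Main()
    {
        // placeholder connection string, replace with your own
        var options = new DbContextOptionsBuilder<BloggingContext>()
            .UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=EFCodeFirst;Trusted_Connection=True;")
            .Options;

        using (var db = new BloggingContext(options))
        {
            db.Database.EnsureCreated(); // or db.Database.Migrate() when using migrations

            db.Blogs.Add(new Blog
            {
                Url = "http://example.com",
                Posts = new List<Post> { new Post { Title = "Hello", Content = "World" } }
            });
            db.SaveChanges();

            // Include loads the related posts along with each blog
            foreach (var blog in db.Blogs.Include(b => b.Posts))
            {
                Console.WriteLine($"{blog.Url}: {blog.Posts.Count} posts");
            }
        }
    }
}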

Obviously this will not work with every scenario and there will be attributes attached to models and novel ways of configuring mapping, but so far it seems pretty straightforward. If you want to go into the more detailed aspects of controlling your data model, try reading the documentation provided by Microsoft so far.

Entity Framework concepts


We could go right into the code fray, but I chose to write some of the more boring conceptual stuff first. Working with Entity Framework involves understanding concepts like persistence, caching, migrations, change saving, the underlying mechanisms that turn code into SQL, the Unit of Work and Repository patterns, etc. I'll try to be brief.

Context


As you have seen earlier, classes inheriting from DbContext are the root of all database access. I say classes, because more than one of them can be used. If you want to copy from one database to another you will need two contexts. The context defines a data model, differentiated from a database schema by being a purely programmatic concept. DbContext implements IDisposable, so for punctual operations it can be used just as one uses an open SQL connection. In fact, if you are tempted to reuse the same context, remember that its memory use increases with the quantity of data it accesses. It is recommended for performance reasons to dispose of a context immediately after finishing its operations. Also, a DbContext class is not thread safe. It stands to reason to use a context for as short a period as possible, inside single threaded operations.

DbContext provides two hooks called OnConfiguring and OnModelCreating that users can override to configure the context and the model, respectively. Careful, though: one can configure the context to use a specific implementation of IModel as its model, in which case OnModelCreating will not be called. The other most important functionality of DbContext is SaveChanges, which we will discuss later. Worth mentioning are Database and Model, properties that can be used to access database and model information and metadata. The rest are Add, Update, Remove, Attach, Find, etc. plus their async and range versions, which allow, for the first time (EF6 did not), dynamically sending an object to a method like Add and letting EF determine what to do with it. It's nothing that sounds very safe to use, but probably there were some scenarios where it was necessary.
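A minimal sketch of the two hooks - the entity and the connection string are placeholders of my own:
using Microsoft.EntityFrameworkCore;

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    // called when the context was not given options through its constructor
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;");
    }

    // called while the model is being built, unless a prebuilt IModel was supplied in the options
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Blog>().Property(b => b.Url).IsRequired();
    }
}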

DbSet


For each entity type to be accessed, the context should have DbSet<TEntity> properties for that type. DbSet allows for the manipulation of entities via methods like Add, Update, Remove, Find, Attach and is an IEnumerable and IQueryable of TEntity. However, in order to persist any change, SaveChanges needs to be called on the context class.

SaveChanges


The SaveChanges method is the most important functionality of the context class, which otherwise caches the accessed objects and their state, waiting either for this method to be called or for the context to be disposed. Important improvements in EF Core allow these changes to be sent to the database in batches of commands. In EF6 and earlier, each change was sent separately, so, for example, adding two entities to a set and saving changes would take two database trips. In EF Core that takes only one trip, unless specifically configured with MaxBatchSize(number); you can revert to the EF6 behavior using MaxBatchSize(1). So far this applies to the SQL Server provider only.
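The batch size is configured on the provider options. A sketch, where the connection string variable is a placeholder:
// requires the Microsoft.EntityFrameworkCore.SqlServer package
var options = new DbContextOptionsBuilder<BloggingContext>()
    .UseSqlServer(connectionString, sql => sql.MaxBatchSize(1)) // 1 reverts to the EF6-like one-command-per-trip behavior
    .Options;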

This behavior is the reason why contexts need to be released as soon as their work is done. If you query all the items with a name starting with 'A', all of these items will be loaded in the context memory. If you then need to get the ones starting with 'B', the performance and memory will be affected if using the same context. It might be helpful, though, if then you need to query both items starting with 'A' and the ones starting with 'B'. It's your choice.

One particularity of working with Entity Framework is that in order to update or delete records, you first need to query them. Something like
context.Posts.RemoveRange(context.Posts.Where(p => p.Title.StartsWith("x")));
There is no .RemoveRange(predicate) because it would be impossible to resolve a query afterwards. Well, not impossible, but it would have to somehow remember the predicate, alter subsequent selects to gather all the information required and apply the deletion on the client side. Too complicated. There is a way to access the database by writing SQL directly, and again EF Core has some improvements for this, but raw SQL changes are opaque to an already existing context.
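For completeness, the raw SQL route would look something like this - with the caveat above in mind, the context will not know what you changed:
// executes directly against the database, bypassing change tracking
context.Database.ExecuteSqlCommand("DELETE FROM Posts WHERE Title LIKE 'x%'");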

Unit of Work and Repository patterns


The Repository pattern is an example of what I was calling before an alternative to Entity Framework: a separation of data access from business logic that improves testability and keeps distinct responsibilities apart. That doesn't mean you can't do it with EF, but sometimes it feels pretty shallow and developers may be tempted to skip this extra encapsulation.

A typical example is getting a list of items with a filter, like blog posts starting with something. So you create a repository class to take over from the Posts DbSet and create a method like GetPostsStartingWith. A naive implementation returns a List of items, but this actually hinders EF in what it tries to do. Let's assume your business logic requires you to return the first ten posts starting with 'A'. The initial code would look like this:
var posts=context.Posts.Where(p=>p.Title.StartsWith("A")).Take(10).ToList();
In this case the SQL code sent to the database is something like SELECT TOP 10 * FROM Posts WHERE Title LIKE 'A%'. However, code looking like this:
var repo=new PostsRepository();
var posts=repo.GetPostsStartingWith("A").Take(10).ToList();
will first pull all posts starting with "A" and only then retrieve the first 10. Ouch! The solution is to return IQueryable instead of IEnumerable or a List, but then things start to feel fishy. Aren't you just shallowly proxying the DbSet?
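To make that fishiness concrete, here is a sketch of the IQueryable-returning version - the class and method names are mine, for illustration only:
using System.Linq;

public class PostsRepository
{
    private readonly BloggingContext _context;

    public PostsRepository(BloggingContext context)
    {
        _context = context;
    }

    // returning IQueryable lets EF compose a later Take(10) into the generated SQL,
    // but the repository is now little more than a proxy over the DbSet
    public IQueryable<Post> GetPostsStartingWith(string prefix)
        => _context.Posts.Where(p => p.Title.StartsWith(prefix));
}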

Unit of Work is some sort of encapsulation of similar activities using the same data, something akin to a transaction. Let's assume that we store the number of posts in the Counts table. So if we want to add a post we need to do the adding, then change the value of the count. The code might look like this:
var counts=new CountsRepository();
var blogs=new BlogRepository();
var blog=blogs.Where(b=>b.Name=="Siderite's Blog").First();
blog.Posts.Add(post);
counts.IncrementPostCount(blog);
blogs.Save();
counts.Save();
Now, since this selects a blog, changes its posts and then updates the counts, there is no reason to use different contexts for the operation. So one could create a Unit of Work class that would look a bit like a common repository for blogs and counts. Let's ignore the silly example, as well as the fact that we are doing post operations through the BlogRepository, which is something we are kind of forced to do in this situation unless we start to deconstruct EF operations and recreate them in our code. There is a bigger elephant in the room: there already exists a class that encapsulates access to the database, caches the items retrieved and creates one atomic operation for both changes. It's the context itself! If we instantiate the repositories with a constructor that accepts a context, then all one has to do to atomize the operations is to put the code inside a using block.
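A sketch of what I mean, with the repository names and methods invented for the example:
using (var context = new BloggingContext(options))
{
    var blogs = new BlogRepository(context);    // hypothetical repositories that
    var counts = new CountsRepository(context); // receive the shared context

    var blog = blogs.GetByName("Siderite's Blog");
    blog.Posts.Add(post);
    counts.IncrementPostCount(blog);

    // one SaveChanges, one atomic trip to the database for both changes
    context.SaveChanges();
}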

There are also controversies related to the use of these two patterns with EF. Rob Conery has a nice blog post suggesting Command/Query objects instead. His rationale is that if you have to pass a context object, as above, there is not much decoupling involved.

I lean towards the idea that you need a Data Access Layer encapsulation no matter what. I would put the using block in a method in a class rather than pass the context around or drop the repository. Also, since we saw that entity type is not a good criterion for separating "repositories" - I feel that I should name them differently in this situation - and the intent of the methods is already declared in their names (like GetPosts...), these encapsulation classes should be separated by some other criteria, like ContentRepository and ForumRepository, for example.

Migrations


Migrations are cool! The idea is that when making changes to the structure of the database, one can extract those changes into a .cs file that can be added to the project and to source control. This is one of the clear advantages of using Entity Framework.

First of all, there are a zillion tutorials on how to enable migrations, most of them wrong. Let's list the possible ways you could go wrong:
  • Enable-Migrations is obsolete - older tutorials recommended to use the Package Manager Console command Enable-Migrations. This is now obsolete and you should use Add-Migration <Name>
  • Trying to install EntityFramework.Commands - due to namespace changes, the correct namespace would be Microsoft.EntityFrameworkCore.Commands anyway, which doesn't exist. EntityFramework.Commands is version 7, so it shouldn't be used in .NET Core. However, at one point or another, this worked if you added some imports and changed stuff around. I tried all that only to understand the sad truth: you should not install it at all!
  • Having a DbContext inheriting class that doesn't have a default constructor or is not configured for dependency injection - the migration tool looks for such classes then creates instances of them. Unless it knows how to create these instances, the Add-Migration will fail.

The correct way to enable migrations is... to install the packages from the Database First section! Yes, that is right, if you want migrations you need to install
Install-Package Microsoft.EntityFrameworkCore.Tools –Pre
Install-Package Microsoft.EntityFrameworkCore.SqlServer.Design
Only then may you open the Package Manager Console and run
Add-Migration FirstMigration
Note that I am discussing an SQL Server example. It is possible you will need other packages if using a different type of database.

The result is a folder called Migrations in which you will find two files: a snapshot and the migration itself. Here is an example of the snapshot:
[DbContext(typeof(BloggingContext))]
partial class BloggingContextModelSnapshot : ModelSnapshot
{
protected override void BuildModel(ModelBuilder modelBuilder)
{
modelBuilder
.HasAnnotation("ProductVersion", "1.0.0-rtm-21431")
.HasAnnotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn);

modelBuilder.Entity("EFCodeFirst.Blog", b =>
{
b.Property<int>("BlogId")
.ValueGeneratedOnAdd();

b.Property<string>("Url");

b.HasKey("BlogId");

b.ToTable("Blogs");
});

modelBuilder.Entity("EFCodeFirst.Post", b =>
{
b.Property<int>("PostId")
.ValueGeneratedOnAdd();

b.Property<int>("BlogId");

b.Property<string>("Content");

b.Property<string>("Title");

b.HasKey("PostId");

b.HasIndex("BlogId");

b.ToTable("Posts");
});

modelBuilder.Entity("EFCodeFirst.Post", b =>
{
b.HasOne("EFCodeFirst.Blog", "Blog")
.WithMany("Posts")
.HasForeignKey("BlogId")
.OnDelete(DeleteBehavior.Cascade);
});
}
}

And here is the migration itself:
public partial class First : Migration
{
protected override void Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.CreateTable(
name: "Blogs",
columns: table => new
{
BlogId = table.Column<int>(nullable: false)
.Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),
Url = table.Column<string>(nullable: true)
},
constraints: table =>
{
table.PrimaryKey("PK_Blogs", x => x.BlogId);
});

migrationBuilder.CreateTable(
name: "Posts",
columns: table => new
{
PostId = table.Column<int>(nullable: false)
.Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),
BlogId = table.Column<int>(nullable: false),
Content = table.Column<string>(nullable: true),
Title = table.Column<string>(nullable: true)
},
constraints: table =>
{
table.PrimaryKey("PK_Posts", x => x.PostId);
table.ForeignKey(
name: "FK_Posts_Blogs_BlogId",
column: x => x.BlogId,
principalTable: "Blogs",
principalColumn: "BlogId",
onDelete: ReferentialAction.Cascade);
});

migrationBuilder.CreateIndex(
name: "IX_Posts_BlogId",
table: "Posts",
column: "BlogId");
}

protected override void Down(MigrationBuilder migrationBuilder)
{
migrationBuilder.DropTable(
name: "Posts");

migrationBuilder.DropTable(
name: "Blogs");
}
}

Note that this is not something that copies the changes in data, only the ones in the database schema.
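To actually apply pending migrations you would run Update-Database in the Package Manager Console; a code-based sketch of the same thing (context construction left to you) would be:
using (var context = new BloggingContext(options))
{
    // runs the Up() method of every migration not yet applied to the database
    context.Database.Migrate();
}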

Conclusions


Yes, no code in this post. I wanted to explore Entity Framework in my project, but had I continued like that, the post would have become too long. As you have seen, there are advantages and disadvantages to using Entity Framework, but at this point I find it more valuable to use it and meet any problems I find head on. Besides, the specifications of my project don't call for complex database operations, so the data access mechanism is quite irrelevant.

Stay tuned for the next post in which we actually use EF in ContentAggregator!


In the setup part of the series I've created a set of specifications for the ASP.Net MVC app that I am building and I manufactured a blank project to start me up. There was quite a bit of confusion on how I would continue the series. Do I go towards the client side of things, defining the overall HTML structure and how I intend to style it in the future? Do I go towards the functionality of the application, like google search or extracting text and applying word analysis on it? What about the database where all the information is stored?

In the end I opted for authentication, mainly because I have no idea how it's done and also because it naturally leads into the database part of things. I was saying that I don't intend to have users of the application, they can easily connect with their google account - which hopefully I will also use for searching (I hate that API!). However, that's not quite how it goes: there will be an account for the user, only it will be connected to an outside service. While I completely skirt the part where I have to reset the password or email the user and all that crap - which, BTW, was working rather well in the default project - I still have to set up entities that identify the current user.

How was it done before?


In order to proceed, let's see how the original project did it. It first set up a database context, then added Identity using a class named ApplicationUser.

services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
.AddEntityFrameworkStores<ApplicationDbContext>()
.AddDefaultTokenProviders();


ApplicationUser is a class that inherits from IdentityUser, while ApplicationDbContext is something inheriting from IdentityDbContext<ApplicationUser>. Seems like we are out of luck and the identity and db context are coupled pretty strongly. Let's see if we can decouple them :) Our goal: using OAuth to connect with a Google account, while using no database.

Authentication via Google+ API


The starting point of any feature is coding with autocomplete and Intellisense until it works... I mean, reading the documentation. In our case, the ASP.Net Authentication section, particularly the part about authentication using Google. It's pretty skimpy and it only covers Facebook. I found another link that actually covers Google, but it's for MVC 5.

Enable SSL


Both tutorials agree that first I need to enable SSL on my web project. This is done by going to the project properties, the Debug section, and checking Enable SSL. It's a good idea to copy the https URL and set it as the start URL of the project. Keep that URL in the clipboard, you are going to need it later, as well.



Install Secret Manager


Next step is installing the Secret Manager tool, which in our case is already installed, and specifying a userSecretsId, which should also be already configured.

Create Google OAuth credentials


Next let's create credentials for the Google API. Go to the Google Developer Dashboard, create a project, go to Credentials → OAuth consent screen and fill out the name of the application. Go to the Credentials tab and Create Credentials → OAuth client ID. Select Web Application, fill in a name as well as the two URLs below. We will use the localhost SSL URL for both like this:

  • Authorised JavaScript origins: https://localhost:[port] - the URL that you copied previously
  • Authorised redirect URIs: https://localhost:[port]/account/callback - TODO: create a callback action

Press Create. At this point a popup with the client ID and client secret appears. You can either copy the two values from the popup, or close it and download the JSON file containing all the data (project id and authorised URLs among them); the values can also be copied later from the credentials dialog.



Make sure to go back to the Dashboard section and enable the Google+ API, in the Social APIs group. There is a quota of 10000 requests per day, I hope it's enough. ;)



Writing the authentication code


Let's use the 'dotnet user-secrets' tool to save the two credential values. Run the following two commands in the project folder:

dotnet user-secrets set Authentication:Google:ClientId <client-Id>
dotnet user-secrets set Authentication:Google:ClientSecret <client-Secret>

Use the values from the Google credentials, obviously. In order to get to the two values, all we need to do is call Configuration["Authentication:Google:ClientId"] in C#. For this to work we need to have loaded the package Microsoft.Extensions.Configuration.UserSecrets in project.json and, somewhere in Startup, code that looks like this: builder.AddUserSecrets();, where builder is the ConfigurationBuilder.
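For context, this is roughly what the default template's Startup constructor does - a sketch, not the exact project code:
public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        // reads the values stored with 'dotnet user-secrets set ...'
        builder.AddUserSecrets();
    }

    Configuration = builder.Build();
}

public IConfigurationRoot Configuration { get; }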

Next comes the installation of the middleware responsible for authenticating with Google, which is called Microsoft.AspNetCore.Authentication.Google. We can install it using NuGet: right click on References in Visual Studio, go to Manage NuGet packages, look for Microsoft.AspNetCore.Authentication.Google ("ASP.NET Core contains middleware to support Google's OpenId and OAuth 2.0 authentication workflows.") and install it.



Now we need to place this in Startup.cs:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationScheme = "Cookies",
AutomaticAuthenticate = true,
AutomaticChallenge = true,
LoginPath = new PathString("/Account/Login")
});

app.UseGoogleAuthentication(new GoogleOptions
{
AuthenticationScheme="Google",
SignInScheme = "Cookies",
ClientId = Configuration["Authentication:Google:ClientId"],
ClientSecret = Configuration["Authentication:Google:ClientSecret"],
CallbackPath = new PathString("/Account/Callback")
});

Yay! code!

Let's start the website. A useful popup appears with the message "This project is configured to use SSL. To avoid SSL warnings in the browser you can choose to trust the self-signed certificate that IIS Express has generated. Would you like to trust the IIS Express certificate?". Say Yes and click OK on the next dialog.



What did we do here? First, we used cookie authentication, which is not some gluttonous bodyguard with a sweet tooth, but a cookie middleware, of course, and our ticket for authentication without using identity. Then we used another middleware, the Google authentication one, linked to the previous with the "Cookies" SignInScheme. We used the ClientId and ClientSecret we saved previously in the Secret Manager. Note that we specified an AuthenticationScheme name for the Google authentication.

Yet, the project works just fine. I need to do one more thing for the application to ask me for a login and that is to decorate our one action method with the [Authorize] attribute:

[Authorize]
public class HomeController : Controller
{
public IActionResult Index()
{
return View();
}

}

After we do that and restart the project, the start page will still look blank and empty, but if we look in the network activity we will see a redirect to a nonexistent /Account/Login, as configured:


The Account controller


Let's create this Account controller and see how we can finish the example. The controller will need a Login method. Let me first show you the code, then we can discuss it:

public class AccountController : Controller
{
public IActionResult Login(string ReturnUrl)
{
return new ChallengeResult("Google",new AuthenticationProperties
{
RedirectUri = ReturnUrl ?? "/"
});
}
}


We simply return a ChallengeResult with the name of the authentication scheme we want and the redirect path that we get from the login ReturnUrl parameter. Now, when we restart the project, a Google prompt welcomes us:

After clicking Allow, we are returned to the home page.



What happened here? The home page redirected us to Login, which redirected us to the google authentication page, which then redirected us to /Account/Callback, which redirected us - now authenticated - to the home page. But what about the callback? We didn't write any callback method. (Actually I first did, complete with a complex object to receive all the parameters. The code within was never executed). The callback route was actually defined and handled by the Google middleware. In fact, if we call /Account/Callback, we get an authentication error:


One extra functionality that we might need is the logout. Let's add a Logout method:

public async Task<IActionResult> LogOut()
{
await HttpContext.Authentication.SignOutAsync("Cookies");

return RedirectToAction("index", "home");
}

Now, when we go to /Account/Logout we are redirected to the home page, where the whole authentication flow from above is being executed. We are not asked again if we want to give permission to the application to use our google credentials, though. In order to reset that part, go to Apps connected to your account.

What happens when we deny access to the application? Then the callback action will be called with a different set of parameters, triggering a RemoteFailure event. The source code on GitHub contains extra code that covers this scenario, redirecting the user to /Home/Error with the failure reason:

Events = new OAuthEvents
{
OnRemoteFailure = ctx =>
{
ctx.Response.Redirect("/Home/Error?ErrorMessage=" + UrlEncoder.Default.Encode(ctx.Failure.Message));
ctx.HandleResponse();
return Task.FromResult(0);
}
}

What about our user?


In order to check the results of our work, let's add some stuff to the home page. Mainly I want to show all the information we got about our user. Change the index.cshtml file to look like this:

<table class="table">
@foreach (var claim in User.Claims)
{
<tr>
<td>@claim.Type</td>
<td>@claim.Value</td>
</tr>
}
</table>

Now, when I open the home page, this is what gets returned:

http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier 111601945496839159547
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname Siderite
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname Zackwehdex
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name Siderite Zackwehdex
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress sideritezaqwedcxs@gmail.com
urn:google:profile https://plus.google.com/111601945496839159547


User is a System.Security.Claims.ClaimsPrincipal object that contains not only a simple bag of Claims, but also a list of Identities. In our example I only have one identity and User.Claims are the same as the claims of that single identity, but in other cases, who knows?

Acknowledgements


If you think it was easy to scrape together this simple example, think again. Before the OAuth2 system there was an OpenID based system that used almost the same method and class names. Then there is the way they did it in .NET proper and the way they do it in ASP.Net Core... which changed recently as well. Everyone and their grandmother has a blog about how to do Google authentication, but most of them either don't apply or are obsolete. So, without further ado, let me give you the links that inspired me to do it this way:

Final thoughts


By no means is this a comprehensive walkthrough of authentication in .NET Core; however, I am sure that I will cover a lot more ground in the posts to come. Stay tuned for more!

Source code for the project after this chapter can be found on GitHub.

After spending a day working on the application, I realized that many of the concepts I take for granted have not been discussed. Consider this part as an introduction to the things *I* know about ASP.Net MVC. :)

Emveesee


The Model View Controller pattern attempts to separate three different concerns of the application: the flow (Controller), the display and the user interface (View) and the various data objects that are passed, validated and manipulated (Model), which are also responsible for the logic and rules of the application. In ASP.Net, MVC means:
  • the models are POCOs, for which validation constraints, display options and other aspects of how they are intended to be used are expressed with attributes decorating the classes or their properties.
  • the controllers are classes inheriting from Controller, their names ending with "Controller". Their methods are called controller actions and represent endpoints for HTTP calls. Attributes are again used to configure these actions, like if they need to be accessed by POST or PUT. The responsibility of controllers is to... well... control the action in the application.
  • the views are files with the .cshtml extension. They are found in the Views folder and the convention is that the view for a controller action is found in /Views/ControllerName(without "Controller")/ActionName. While I guess someone could hack MVC to use the old ASP.Net engine, the preferred engine for views is Razor (the one with the mustaches). The direction of the ASP.Net MVC views and templates is fine control over the generated markup, as opposed to the old ASP.Net way of encapsulating everything in server side user controls. The preferred way to encapsulate control behavior is now client side, with frameworks helping with it like AngularJS and ReactJS.

Add to this services and middleware. Services are encapsulation of logic. For example a service may determine aspects of flow of data from the controller to the view or validate the values of a model class. While these two services would be part of the Controller and Model parts, respectively, in code they are pieces of code, usually implementations of interfaces - for testing and dependency injection reasons - that determine their purpose. Middleware are components - they can be composed - that react to what happens in the HTTP stream (requests and responses). Most of what MVC does internally depends on middleware and services.

MVC controversies


The ugly truth is that MVC has been around for 30 years and people implement it differently every time. Because it spans a large domain (it handles everything, basically) the vague wording used to describe the pattern causes a lot of confusion. Why did Microsoft choose MVC for their new version of ASP.Net? Well, first of all because their first attempt - ASP.Net Forms, which tried to bring the desktop application development style to the web - failed miserably. Second, because at the time they were making the decision, Ruby on Rails was the coolest thing since man discovered fire. I mean, it made a shitty programming language like Ruby look useful! (Just kidding, irate Ruby developer. Was trying to see if you're paying attention). People are still fighting over whether the model is supposed to handle the application logic, or the controller, or even a separate part (services?).

My own interpretation is that models are mostly data classes. They should have code that handles their own internal state, but nothing else. The controller controls the flow of the data, meaning that if the user accesses an action, the method will direct execution towards the correct component handling that action. Therefore, for me, application logic is neither in the controller nor in the models.

Imagine a family: the wife tells the husband "go to the market and buy 10 eggs and some tomatoes!". Wifey is the user, the husband is the controller. He understands the intent of the user and directs execution towards its implementation. Now, the husband could go to the market and buy the eggs himself, but that would be bad form (heh heh), so he goes to his two sons and tells them "Frank and Joe, go to the market! Frank, get me some eggs, 10 of them. Joe, get me some tomatoes for a salad. Now, git!" (get it? git? I am on a roll). At this point the sons are confused: are they Model or are they Controller? Meanwhile the eggs and tomatoes are clearly part of the model. An egg may spoil, for example, and that is probably the responsibility of the egg. You may consider that the market basket containing eggs and tomatoes is the model, conveniently leaving aside the functionality when the user sees the quality of the purchases and chastises the poor controller for it.

Certainly, ASP.Net MVC leans towards my interpretation of things. Classes in the Models folder in the default application are just classes with decorated properties; the piece of code that interprets the attributes and their values, the one that binds parameters to properties, is a service. The code that does stuff, after the controller has determined it's OK to be executed, lives again in managers and services. For example there are sign-in and user managers in the code, which are implementations from the .NET code itself. If one inlines all of them, it looks to me as if the controller is taking care of the logic of the application, not the model.

Convention over configuration


ASP.Net MVC embraced the Convention over configuration paradigm. You don't need to hook up controllers anywhere, or define the dependency between views and controllers. A controller for movies will be a class called MoviesController, placed in the Controllers folder and the convention is that every call to its actions would start with /movies. A view for a List action would be placed in /Views/Movies/List.cshtml and expected to be called as http(s)://host:port/movies/list. A typical code would look like this:
public IActionResult List()
{
return View();
}
View is a shorthand for rendering the view of this method by naming convention.

The pipeline for MVC is based on what they call middleware - what in the old ASP.Net were called handlers, I guess. The main component of an MVC application is its WebHostBuilder, which then uses a class to configure itself, which is usually named Startup. The methods and properties in Startup will be executed/populated using dependency injection, meaning that parameters will be interfaces: their specific implementation will be determined by ASP.Net MVC based on user configuration, if any.

Same thing applies to action parameters. Values are obtained from the body of the call or from the HTTP GET or POST parameters. An action like List(int id, string name) will get the correct parameters from a call like /list?id=1&name=Steven. Based on the routing (the default one: {controller=Home}/{action=Index}/{id?} being the one responsible for the most common REST conventions), the same result can be achieved with /list/1?name=Steven, for example. The values can be retrieved also for a method looking like this: List(User user), if the User class has Id and Name properties. This is called model binding and most of the interaction between the browser and the .NET code at the backend will be done through it.
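A sketch of the complex-type case - the User class and the controller here are hypothetical, just to show the shape of it:
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class UsersController : Controller
{
    // /users/list/1?name=Steven binds Id from the route value and Name from the query string
    public IActionResult List(User user)
    {
        return View(user);
    }
}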

Extension methods: dependency injection and middleware


The pattern of configuration for your application is to have it built by fluent methods, using the so called builder pattern. You start with the WebHostBuilder, for example, then you .UseKestrel, .UseIISIntegration, .UseStartup, etc. The default template code looks like this:
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseIISIntegration()
.UseStartup<Startup>()
.Build();

host.Run();
These methods are extension methods, their complex functionality hidden behind this simple pattern of use. Check out the simple .AddMvc() method, how deceptively it covers so much complexity. In its source code there are other extension methods, each with their own complexity, eventually leading to either injecting some dependency or configuring and adding middleware to the MVC pipeline. It seems to me that dependency injection methods start with Add, while the middleware inserting methods start with Use.

As an example, let's take one of the lines inside .AddMvc(): builder.AddRazorViewEngine();. Following one of the many branches defined by this extension pattern (I am still not sure how much I like it) we get to MvcRazorMvcCoreBuilderExtensions.AddRazorViewEngineServices, which injects a lot of dependencies. Take a look at
// This caches compilation related details that are valid across the lifetime of the application.
services.TryAddSingleton<ICompilationService, DefaultRoslynCompilationService>();
. One can change the implementation of the compilation service! Alternatively, let's look at .UseStaticFiles(). It's a wrapper over
app.UseMiddleware<StaticFileMiddleware>();
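The same pattern is available to us. Here is a sketch of a made up middleware and its Use... extension method - the names are mine, purely for illustration:
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestLoggingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        Console.WriteLine($"{context.Request.Method} {context.Request.Path}");
        await _next(context); // pass control to the next middleware in the pipeline
    }
}

public static class RequestLoggingExtensions
{
    // "Use..." methods typically just wrap UseMiddleware<T> for a friendlier call site
    public static IApplicationBuilder UseRequestLogging(this IApplicationBuilder app)
        => app.UseMiddleware<RequestLoggingMiddleware>();
}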

Open source .NET Core


As you've seen from the examples above, I've often linked to the source code of methods and classes. That is because Microsoft finally decided to open source the .NET Core code, which both lets people find and solve their problems and allows developers to find where pesky, hard to explain bugs are coming from. The extension method pattern makes it difficult to explore what is going on, as you have to switch from one project to another on the GitHub interface (or your own file system, depending on how you decide to work). Dependency injection makes it even harder, as you first have to find the interface responsible for your current programming task, then find all implementations and what injected them in the first place. I tried to find some decent exploring tool, but found none and I am too busy to make one of my own. Homework? :)

Even so, it is a great boon that one can look into the innards of the Microsoft code. It not only helps pinpoint issues, but also teaches about how one of the biggest software companies writes code. I don't want to dissect middleware in this post, but I strongly suggest you take a look at how they are made and how they work. Whenever I find it useful, I will mention the middleware responsible for what I am discussing, so try to make an effort to look at its source code and see what it actually does.

Attributes


Attributes are used all over ASP.Net MVC. They tell what HTTP method to accept for controller actions, how to authorize access, how to validate models, how to bind the incoming parameters to models. Here is an example:
//The user needs to be authorized to access this method
[Authorize]
//only POST requests
[HttpPost]
// over HTTPS are accepted
[RequireHttps]
//The URL for this method will be /util not /Hardcore
[ActionName("util")]
//controller method
public IActionResult Hardcore([FromBody] /*the data will be taken from the body of the request*/ HardcoreData data)
{
//only show the view if the model is valid
if (ModelState.IsValid) return View();
//otherwise return a bad request
return BadRequest(ModelState);
}

public class HardcoreData
{
// value needs to be set (not null)
[Required]
// the format of Id needs to be a URL
[DataType(DataType.Url,ErrorMessage = "The Id need to be a URL")]
// shorter or equal than 500 characters
[StringLength(500,ErrorMessage ="Maximum URL length is 500 characters")]
public string Id { get; set; }

//Range validation from 0 to 100
[Range(0,100,ErrorMessage ="Value needs to be between 0 and 100 and even")]
//Custom validation using the class and method mentioned
[CustomValidation(typeof(HardcoreDataValidator),"ValidateValue")]
public int Value { get; set; }
}

public class HardcoreDataValidator {
public static ValidationResult ValidateValue(int value)
{
return value % 2 == 0
? ValidationResult.Success
: new ValidationResult("Value needs to be even");
}
}

These attributes will be read and used by various services injected at startup. Everything can be changed, so for example you may change the validation system to interpret the RangeAttribute values differently or ignore RequiredAttribute or use custom attributes. Attribute classes only mark the intent of the developer, but do almost nothing themselves.

Models


I've mentioned previously that models are used to move data back and forth. Model binding is responsible for taking HTTP requests and turning their parameters into C# classes. Services then use those models: Entity Framework, the validation system, the Razor views, and so on. You've seen in the previous example how an object may be read from the body of a request. Similarly, it can be read from the HTTP parameters sent to the action method. Read an example of an investigation to see how various methods of model binding can be used with different attributes.

An important use case for models is validation. Some of it was demonstrated above. Read more in the documentation. An interesting part of it is the client validation that is implemented out of the box with the right javascript imports and using the right attributes.

Views


In ASP.Net Forms, the code and the presentation were (somewhat) separated into .aspx markup and .cs codebehind. The aspx syntax is probably isomorphic with the Razor syntax and I remember that at one time you could use ASP.Net Forms with Razor. In MVC, views have code in them, using Razor, but they are not strongly coupled with a specific piece of code. In fact, one can reuse a view for multiple models, especially the partial ones - which take over from UserControls, I guess. So in fact there is quite an overlap between ASP.Net Forms and MVC, if you add a separate injection mechanism to Forms in order to decouple markup and codebehind.

For views, the biggest difference as far as I am concerned is the encapsulation of reusable content, what before were controls. Panels, Grids, UserControls, all of them inherited from a Control class that handled the various ASP.Net phases of its lifecycle. There was no job interview in which you weren't asked about the ASP.Net lifecycle and now it's irrelevant. Nowadays, you render things from the markup up, with focus on the client side. HTML helpers and Tag Helpers are what allow you to encapsulate some rendering logic.

What caused this switch to a new paradigm? Well, I would say HTML5 and javascript frameworks. You would have a wonderful Grid control rendering a nice table layout and the developer would cry foul because he wants everything with DIVs. You would have a nice Calendar extension control and the dev would dismiss it immediately because he wanted to use the latest jQueryUI client side calendar. Most of all, it would be because the web designer would use Microsoft agnostic tools that create pure HTML and then the poor dev would have to reverse engineer that in order to get the same layout with default controls. Today a grid is only a DIV, a Razor @foreach and a template for the rows using the values of the items displayed. Certainly all of this can be encapsulated further into your own library of HTML helpers, but you would have complete control over it.

In ASP.Net MVC Core partial views will be superseded by View Components. If you thought the Microsoft interpretation of MVC was a little vague, this will make your head explode. View Components are most similar to controllers, only you can't call them directly via HTTP, they are not part of the controller lifecycle and they can't use filters. They have views associated with them in /Views/{ControllerName}/Components/{ViewComponentName}/{ViewName} or /Views/Shared/Components/{ViewComponentName}/{ViewName}. You may invoke them directly from a controller or from a view, using the wonderfully ridiculous syntax
@await Component.InvokeAsync("Name of view component", <anonymous type containing parameters>)

If not specified by the user, views are discovered by their location in the project. Specifying the view means specifying the exact path of the cshtml file, an ugly and not recommended solution. Otherwise, when you just return View();, MVC looks in /Views/ControllerName/ViewName.cshtml and then in /Views/Shared/ViewName.cshtml. As we are accustomed, this default behavior can be changed by implementing a different IViewLocationExpander.

You may specify a model type for a view with the @model directive, which helps a lot with Intellisense. Otherwise, you may pass server side data through ViewData or the dynamic ViewBag, which allow ad hoc use of properties but don't help much with Intellisense. Using the wrong property name will generate runtime errors.

Needless to say, before I actually go into the code, views are a bit of a mystery to me as well. They also clash a bit with the architecture of an application that makes most sense to me: API + client side code. I feel I need to discuss this, so...

Types of MVC application architecture


In .NET proper ASP.Net MVC and ASP.Net Web API were two different things, with huge overlap of functionality. In .NET Core, they are the same thing, gathered under the umbrella of MVC. A controller action can receive AJAX calls in JSON format and return properly formatted HTML5 markup, for example. It is very difficult to find a reasonable way to separate the two concepts in .NET. However, there are two major ways of using them, separating them by use, as it were. The MVC application that uses controllers and views is still a product of turning ASP.Net Forms into a Ruby on Rails clone. While the overall architecture of the application has changed, giving more control to the developers, it also constrains them into a type of functional architecture that may be - frankly - obsolete already.

There are three types of architectures that I will discuss for a very simple application that displays news items using their title, description, url and image:
  1. starting from an ASP.Net Forms page and a list of NewsItem objects in C#, we use an .aspx page that contains a Grid control. We define the way the title is rendered as a link, the description as a short text block and the image as a side thumbnail.
  2. starting from an ASP.Net MVC controller and a list of NewsItem objects in C#, we render a view which uses a Razor @foreach to display sections with a title link, a description and a thumbnail.
  3. starting from an HTML page, we fire an AJAX call to a .NET API that returns a Javascript array of NewsItem objects, that then we render as sections with a title link, a description and a thumbnail, maybe by using an MVC client-side library like AngularJS.

See what I mean? The first two versions are basically the same. Whether the mechanism for rendering comes from a Forms Control or from a Razor loop is irrelevant to the overall design of the app. The third, though, presents some very interesting ideas:
  • The website is not a .NET website. It's pure HTML. It can be served from any type of server, on any platform, can be created with any tools.
  • There is no visual interface on the actual .NET server side. It's a simple API that sends and receives data in serialized form.
  • The MVC architecture moves towards the client side, where views, models and controllers are just Javascript code, HTML and CSS.
  • There is a very clear functional separation of concerns. There is server side development: C#, serialization and persistence of data, sensitive or resource intensive processing, the good ole things that .NET developers love. And there is the client side development: HTML, Javascript, CSS, responsive design, native mobile apps and all that crap that designers and frontend developers do.

(Again, joking, dear frontend or mobile developer! Just making sure you weren't asleep)

The conclusions are staggering, actually. With no concern for presentation, the server side API framework can be incredibly tiny. Efforts can be turned towards making it efficient, fast, secure, using fewer resources, being scalable. There is no need for a Razor engine, HTML helpers, partial views, View Components; no one cares about them. Instead, what it enables is working with any kind of client side user interface. Mobile native apps from all platforms, multiple web sites, other APIs, they all could just attach to the API and present functionality. Meanwhile, the client side interface developer is exempt from all the dependencies on Microsoft tools, from concerns over how many servers there are, where they are located, and from the general background functionality of the application.

I've worked in such a way, it was great! People working on iOS, Android and web applications would just come to me and ask for an API that does this and that. After days of fighting over how the API signature should look :) everyone would just do what they are good at. Even more, because we were so different, bugs were easier to discover when we tried to connect our work over this simple interface.

The downsides are blessings in disguise. As an API call needs to finish quickly and return small quantities of data, the developer is forced to consider from the get go things like: pagination, chunking, asynchronous programming, concurrency, etc. Instead of importing a list of URLs for the news and then waiting for the output of the page while the server side is spidering the data, the app needs to show "importing data, please wait" and then periodically query the API if the import is finished. When hundreds of people try to do this, there is no problem, as the list of links to spider just grows and the same process extracts the data. If two users import the same links, they only get spidered once.

Even if the application that we will be working on is not based on this design, consider from the beginning if you even need to use ASP.Net in an MVC way. The world is moving away from "applications" and towards "services". Perhaps the API itself would only be a front that accesses other APIs in the background, microservices that are optimized perfectly for the tiny bit they perform.

Data Access Architecture


A small rant against ORMs


Just like with the overall design, one may use different ways of accessing data. The ASP.Net MVC guide for working with data suggests a single clear path: the Microsoft ORM, Entity Framework. As I have yet to use it in any serious capacity, I will not explain EF concepts here. I will ask you a question instead: do you even need an ORM?

Object Relational Mappers are tools that abstract the database from the viewpoint of a developer. They work with contexts and sets and strongly typed objects and have great Intellisense support. Having started as a way to map an existing database to a .NET data framework, Entity Framework now goes the other direction: code first! You start writing your app, using Entity Framework just as if you already had everything you need, and it creates and maintains the database in the background. Switching from SQL Server to PostgreSQL or SQLite or even a custom data persistence method is a breeze. They sound great!

However, if you already know what persistence model you use and are proficient in designing and optimizing the data structure there, using an ORM starts to lose its appeal. Just as with ASP.Net Forms, you have no control over the way the ORM chooses to communicate with the database. It may do better than you or it may do horribly bad. You start developing your app, everything works fine, you add feature after feature and when you finally load the actual real life data something goes wrong and you have no idea what and where.

There already are patterns for abstracting data access and usually they involve consuming the data from a separate library (or service) that encapsulates the desired behavior and is structured by intention. Why would I get all NewsItems, when there are millions of them and in no situation I can conceive would I need all of them? Why would I get a NewsItem by Id, when the Id means nothing to me and things like the URL are more relevant? Why would I choose to store in memory all the items I want to delete, when my condition for them to disappear is a simple WHERE condition?

OK, OK, I know that if you worked with Entity Framework you have a lot of (good) answers to all of these questions. Yet my original question still needs to be considered before you embark on your development journey: do you even need Entity Framework?

The main disadvantages I see for Entity Framework specifically are:
  • It diffuses the API for working with data. Instead of writing a NewsItemManager class that gives you items by url and by date, for example, the developer is tempted to write custom queries inside the logic of the application. This leads to difficulties refactoring the code or redesigning the application.
  • It hides the complexity of the database. Instead of working with the actual stored data, you work with an abstraction that may look good to you, but hides problems that you are tempted to ignore.
  • It forces switches of competencies. If you want to debug and optimize your data access you now need an Entity Framework expert, rather than a database expert.
  • It causes technical debt that you are not even aware of. From this list, I believe this to be the most insidious. There is a chance, that may be very small, that your application needs a functionality that Entity Framework was not designed for. EF works great in every other area except that one. And when you try to fix it, you have several options that are all horrible: create a separate system for it, hack Entity Framework into submission, or leave it slow and bad because everything else works so well otherwise. At this point, when you notice there is a problem, it's already too late.

In our application I will gladly use Entity Framework. It seems some of the basic functionality of MVC, like identity, is strongly designed to work together with the Entity Framework data abstraction. Yet even so, I will try to abstract the data layer - mostly because I have no need to implement it for this demo. This will probably lead to an interesting consequence: the default MVC modules will use EF in one way, while I will use it for my application in another.

Entity Framework concepts


An actual advantage of EF that I think is great is the concept of migrations. EF is able to save modifications to the data layer as C# code files that can be added to source control. This helps a lot when working in a team with multiple people.
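For illustration, a generated EF Core migration is just a C# class with Up and Down methods that ends up in source control; the class and column names below are hypothetical, but this is roughly the shape:

using Microsoft.EntityFrameworkCore.Migrations;

// Hypothetical migration adding a Category column to a NewsItems table.
public partial class AddCategoryToNewsItems : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // applied when the database is updated to this migration
        migrationBuilder.AddColumn<string>(
            name: "Category",
            table: "NewsItems",
            nullable: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // the rollback, generated automatically
        migrationBuilder.DropColumn(
            name: "Category",
            table: "NewsItems");
    }
}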

As an aside, I was working on a project that used stored procedures to access the database. The data access layer was getting and changing data using these functions and procedures, which were saved in a folder as .SQL create files. It was easy during deployment to delete all procedures and functions and then recreate them, but what about database schema or data changes? For this we used a folder of .SQL changes. For each file we also needed to create a rollback file, to undo whatever that change was doing. They were difficult to manage at first, but after a while you got the hang of it. I wonder if Entity Framework allows for this kind of workflow. That would be great. Aside over.

The root of an EF model seems to be the context. Inheriting from DbContext (or, as in the default template app, from IdentityDbContext&lt;ApplicationUser&gt;, coupling it inexorably with the identity of the user), this class does not need a lot of code of its own at first. As time goes by, point changes to the data mechanism will probably be hooked in here. The DbContext will have properties of type DbSet&lt;SomeEntity&gt; which will be used to query said entities. A simple services.AddDbContext&lt;MyDbContext&gt;() in the startup class declares your willingness to work with one context or another.
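A minimal sketch of such a context and its registration; the entity names are invented for this project and not taken from any actual source:

using Microsoft.EntityFrameworkCore;

// Hypothetical entities for the content aggregator
public class Query { public int Id { get; set; } public string Text { get; set; } }
public class ContentItem { public int Id { get; set; } public string Url { get; set; } }

public class ContentDbContext : DbContext
{
    public ContentDbContext(DbContextOptions<ContentDbContext> options) : base(options) { }

    // each DbSet is the entry point for querying that entity
    public DbSet<Query> Queries { get; set; }
    public DbSet<ContentItem> ContentItems { get; set; }
}

// in Startup.ConfigureServices:
// services.AddDbContext<ContentDbContext>(options =>
//     options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));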

A mix of conventions and attributes defines the mapping between your context and the underlying database. A good link to explore this can be found here.

Another interesting quality of Entity Framework is that you can use it in memory, very useful with automated testing. Here is a link that explains it.

Using LINQ to Entities and the DbSet properties of the context, one can create, read, update and delete records, but there are some differences from what you may be used to. The delete or update operations by default need to first retrieve the items, then alter them. A good intro to the changes in Entity Framework 7 can be found here.
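As a rough sketch of those operations, using the hypothetical ContentDbContext from above:

using System.Linq;

public static class CrudSample
{
    public static void Run(ContentDbContext context)
    {
        // Create
        context.ContentItems.Add(new ContentItem { Url = "http://example.com" });
        context.SaveChanges();

        // Read: the LINQ query is translated to SQL by the provider
        var matching = context.ContentItems
            .Where(c => c.Url.Contains("example"))
            .ToList();

        // Update: retrieve first, modify, then save
        var first = matching.First();
        first.Url = "http://example.com/updated";
        context.SaveChanges();

        // Delete: by default you also retrieve first, then remove
        var toDelete = context.ContentItems.Where(c => c.Url.StartsWith("http://example.com"));
        context.ContentItems.RemoveRange(toDelete);
        context.SaveChanges();
    }
}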

The pattern used by Entity Framework is called "unit of work". If you want to go down the rabbit hole, look it up. A nice article about it and some possible improvements can be found here.

An interesting reason for using Entity Framework would be for when you don't have a lot of control over your persistence medium. I haven't worked with "the cloud" yet, but basically they give you some services and tax you for using them. If EF can abstract that away and minimize cost, it would be a boon, but I have no information about this.

Miscellaneous


The post cannot be complete without mentioning a few more related concepts. In this series I will not go into details for many of them, so read the info the .NET team has prepared for each subject.

Leaving so soon?


The next post will be about authentication, more exploratory and with code examples.

Following my post about things I need to learn, I've decided to start a series about writing an ASP.Net MVC Core application, covering as much ground as possible. As a result, this experience will cover .NET Core subjects and a thorough exploration of ASP.Net MVC, plus some concepts related to Visual Studio, project structure, Entity Framework, HTML5, ECMAScript 6, Angular 2, ReactJs, CSS (LESS/SASS), responsive design, OAuth, OData, shadow DOM, etc.

Learning ASP.Net MVC series:
  1. Setup
  2. MVC Concepts
  3. Authentication
  4. Entity Framework Fundamentals
  5. Upgrading project to .NET Core 1.1
  6. Dependency Injection and Services

Specifications


In order to start any project, some specifications need to be addressed. What will the app do and how will it be implemented? I've decided on a simplified rewrite of my WPF newsletter maker project. It gathers subjects from Google by searching for configurable queries, spiders the contents, then displays, filters and sorts them, extracting text and analyzing content. It remembers the already loaded URLs and allows for marking them as deleted and setting a category. It will be possible to extract the items that have a category into a newsletter containing links, titles, short descriptions and maybe a picture.

The project will be using ASP.Net Core MVC, combining the API and the display in a single web site (at least for now). Data will be stored in SQLite via Entity Framework. Later on the project will be switched to SQL Server to see how easy that is. The web site itself will have HTML5 structure, using the latest semantic elements, with the simplest possible CSS. All project-owned Javascript will be ECMAScript 6. OAuth might be needed for using the Google Search API, and I intend to use Google/Facebook/Twitter accounts to log into the application, with a specific account marked in the configuration as Administrator. The IDE will be Visual Studio (not Code). The Javascript needs to be clean, with no CSS or HTML in it, by using CSS classes and HTML templates. The HTML needs to be clean, with no Javascript or styling in it; moreover it needs to be semantically unambiguous, so as to be easily molded with CSS. While starting with a desktop-only design, a later phase of the project will revamp the CSS, try to make the interface beautiful and make it work for all screen formats.

Not the simplest of projects, so let's get started.

Creating the project


I will be using Visual Studio 2015 Update 3 with the .Net Core Preview2 tooling. Personally I had a problem installing the Core tools for Visual Studio, but this link solved it for me with a command line switch (short version: DotNetCore.1.0.0-VS2015Tools.Preview2.exe SKIP_VSU_CHECK=1). First step is to create a New Project → Visual C# → .NET Core → ASP.NET Core Web Application. I will name it ContentAggregator. At the prompt asking which project template I want, I select Web Application, deselect the Microsoft Azure "Host in Cloud" checkbox, which for whatever reason is checked by default, and click on Change Authentication to select Individual User Accounts.



Close the "Welcome to ASP.Net Core" page, because the template will be heavily changed by the time we finish this chapter.

The default template project


For a more detailed analysis of a .NET Core web project, try reading my previous post about the dotnet default template for web apps. This one will be quick and dirty.

Things to notice:
  • There is a global.json file that lists two projects, src and test. So this is how .NET Core solutions were supposed to work. Since the json format will be abandoned by Microsoft, there is no point in exploring this too much. Interestingly, though, while there is a "src" folder, there is no "test" folder.
  • The root folder contains a ContentAggregator.sln file and the src folder contains a ContentAggregator folder with a ContentAggregator.xproj file. Core seems to have abandoned the programming language dependent extension for project files.
  • The rest of the project seems to be pretty much the default dotnet tool one, with the following differences:
  • the template uses SQL Server by default
  • the lib folder in wwwroot is already populated with libraries

So far so good. There is also the little issue of the database. As you may remember from the post about the dotnet tool template, there were some files that needed to initialize the database. The error message then said "In Visual Studio, you can use the Package Manager Console to apply pending migrations to the database: PM> Update-Database". Is that what I have to do? Also, we need to check what the database settings are. While I do have an SQL Server instance on this computer, I haven't configured anything yet. The Project_Readme.html page is not very useful, as the link Run tools such as EF migrations and more goes to an obsolete link on github.io (the documentation seems to have moved to a microsoft.com server now).

I *could* read/watch a tutorial, but what the hell? Let's run the website, see what it does! Amazingly, the web site starts, using IIS Express, so I go to Register, to see how the database works and I get the same error about the migrations. I click on the Apply Migrations button and it says the migrations have been applied and that I need to refresh. I do that and voila, it works!

So, where is the database? It is not in the bin folder as WebApplication.db like in the Sqlite version. It's not in the SQL Server, the service wasn't even running. The DefaultConnection string looks like "Server=(localdb)\\mssqllocaldb;Database=aspnet-ContentAggregator-7fafe484-d38b-4230-b8ed-cf4a5a8df5e1;Trusted_Connection=True;MultipleActiveResultSets=true". What's going on? The answer lies in the SQL Server Express LocalDB instance that Visual Studio comes with.

Changing and removing shit


To paraphrase Antoine de Saint-Exupéry, this project will be set up not when I have nothing else to add, but when I have nothing else to remove.

First order of business is to remove SQL Server and use SQLite instead. Quite the opposite of how I had pictured it, but hey, you do what you must! In theory all I have to do is replace .UseSqlServer with .UseSqlite and then adjust the DefaultConnection string from appsettings.json to something like "Data Source=WebApplication.db". Did that, fixed the namespaces and imports, ran the solution. Migration error, Apply Migrations, re-register and everything is working. WebApplication.db and everything.
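For reference, the change is roughly this - a sketch against the default template's ApplicationDbContext, so exact names may differ in your project:

// Startup.cs, ConfigureServices - before:
// services.AddDbContext<ApplicationDbContext>(options =>
//     options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

// after (requires the Microsoft.EntityFrameworkCore.Sqlite package instead of the SqlServer one):
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

// appsettings.json:
// "ConnectionStrings": { "DefaultConnection": "Data Source=WebApplication.db" }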

Second is to remove all crap that I don't need right now. I may need it later, so at this point I am backing up my project. I want to remove:
  • Database - yeah, I know I just recreated it, but I need to know what those migrations contained and if I even need them, considering I want to register with OAuth only
  • Controllers - probably I will end up recreating them, but we need to understand why things are how they are
  • Models - we'll do those from scratch, too
  • Services - they were specific to the default web site, so poof! they're gone.
  • Views - the views will be redesigned completely, so we delete them also
  • Client libraries - we keep jQuery and jQuery validation, but we remove bootstrap
  • CSS - we keep the site.css file, but remove everything in it
  • Javascript - keep site.js, but empty
  • Other assets like images - removed

"What the hell, I read so much of this blog post just for you to remove everything you did until now?" Yes! This part is the project set up and before its end we will have a clean white slate on which to create our masterpiece.

So, action! Close Visual Studio. Delete bin (with the db file in it) and obj, delete all files in Controllers, Data, Models, Services, Views. Empty files site.css and site.js, while deleting the .min versions, delete all images, Project_Readme.html and project.lock.json. In order to cleanly remove bootstrap we need to use bower. Run
bower uninstall bootstrap
which will remove bootstrap, but won't remove it from bower.json, so remove it from there. Reopen Visual Studio and the project, wait until it restores the packages.

When trying to compile the project, there are some errors, obviously. First, namespaces that don't exist anymore, like Controllers, Models, Data, Services. Remove the offending usings. Then there are services that we wanted to inject, like SMS and Email, which for now we don't need. Remove the lines that try to use them under // Add application services. The rest of the errors are about ApplicationDbContext and ApplicationUser. Comment them out. These are needed for when we figure out how the project is going to preserve data. Later on a line in Startup.cs will throw an exception ( app.UseIdentity(); ) so comment it out as well.
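For orientation, in the 1.x default template the pieces in question look roughly like this once commented out (kept as comments rather than deleted, so they can be restored when we decide how the project will preserve data):

// Startup.cs, ConfigureServices:
// services.AddDbContext<ApplicationDbContext>(options =>
//     options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
// services.AddIdentity<ApplicationUser, IdentityRole>()
//     .AddEntityFrameworkStores<ApplicationDbContext>()
//     .AddDefaultTokenProviders();

// Add application services.
// services.AddTransient<IEmailSender, AuthMessageSender>();
// services.AddTransient<ISmsSender, AuthMessageSender>();

// Startup.cs, Configure:
// app.UseIdentity();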

Finishing touches


Now the project compiles, but it does nothing. Let's finish up by adding a default Controller and a View.

In Visual Studio right click on the Controllers folder in the Solution Explorer and choose Add → Controller → MVC Controller - Empty. Let's continue to name it HomeController. Go to the Views folder and create a new folder called Home. Now you might think that right clicking on it and selecting Add → View would work, but it doesn't. The Add button stubbornly remains disabled unless you specify a template and a model and other stuff. It may be useful later on, but at this stage ignore it. The way to add a view now is to go to Add → New Item → MVC View Page. Create an Index.cshtml view and empty its contents.

Now run the application. It should show a wonderfully empty page with no console errors. That's it for our blank project setup. Check out the source code for this point of the exploration. Stay tuned for the real fun!

A friend of mine has been employed to write code in the new Microsoft cross-platform ASP.Net Mvc Core and one day he asked me about the difference between the [Required] and the [BindRequired] attributes. Not being an expert in ASP.Net MVC, and even less in the Core version, I had no idea, so I went exploring.

Fast and dirty

I created a small project and ran it several times to see what the attributes were doing. RequiredAttribute seemed to work in a straightforward way: decorate a property of an object with it and, when the object is used as a parameter in an MVC API controller method, that property needs to be set. If it is not, the controller's ModelState.IsValid value will be set to false. Something like this:
public IActionResult Test([FromBody]TestModel model)
TestModel is a simple POCO with Required and BindRequired attributes set on its properties:
using System.ComponentModel.DataAnnotations;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class TestModel
{
    [Required]
    public string A { get; set; }
    [BindRequired]
    public string B { get; set; }
}
Now, if I send an object like this
{ "A": "Something", "B": "Something else" }
I get a true ModelState.IsValid value. If I send only property A, I also get a valid model. If I send only B or an empty object, IsValid will be false. So [Required] works as expected, [BindRequired] doesn't seem to do anything.
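Here is a minimal sketch of the whole experiment; the controller name and route are my own, invented for illustration, and TestModel is the class defined above:

using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]/[action]")]
public class TestController : Controller
{
    // POST /api/test/test with a JSON body
    [HttpPost]
    public IActionResult Test([FromBody] TestModel model)
    {
        if (!ModelState.IsValid)
        {
            // [Required] on A makes a missing A invalid here;
            // [BindRequired] on B has no effect when binding from the body
            return BadRequest(ModelState);
        }
        return Ok(model);
    }
}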

Searching through the methods of the controller I noticed one in particular called TryUpdateModelAsync. It tries to populate a model, given as a parameter, with values from the URL or form parameters for the action method. That means if you call the API with something like /api/test/test?A=Something&B=SomethingElse and then run
var valid = TryUpdateModelAsync(model).Result;
, valid will be true. However, if you fail to specify a B parameter, valid will be false. Note that the properties of the model change either way. Property A will be filled with whatever you send it, only the result of the attempt will be false.

My friend was not completely satisfied with what I discovered, but it was what I could do in a few minutes of work.

More work

Frankly, I wasn't satisfied either. The only mention I could find of this BindRequired attribute was in the ASP.Net Core documentation, thrown in a list together with [FromBody] (which applies to parameters, not object properties), and in unit test code. So I started digging.

I went to the GitHub repository for aspnet/Mvc and downloaded the source. I then looked for a class named BindRequiredAttribute. It is a very simple class that inherits from BindingBehaviorAttribute and sends BindingBehavior.Required as the base constructor parameter. Its description is "Indicates that a property is required for model binding. When applied to a property, the model binding system requires a value for that property. When applied to a type, the model binding system requires values for all properties of that type." This doesn't say much. So I started to go through the chain of extension methods, interfaces, dependency injection. It wasn't pretty. Let's just say that after a few pages of blog post where I described wandering through classes and properties and trying everything - you wouldn't have understood anything because I didn't - I've decided to delete everything and search the web for more information.

In the end, experimenting with the project is what made it clear(er) for me. In the example I created, the Test method received an object [FromBody], but if I removed that attribute, the content of the call to the API was ignored. Instead, the values from the URL or form POST would be used to fill the object - this explains why FromBody and BindRequired were grouped in the same list. So for the same code without [FromBody]:
public IActionResult Test(TestModel model)
and now the request /api/test/test?A=Something&B=SomethingElse fills the object without the need for content (and ignoring it if it exists). If I remove either A or B, ModelState.IsValid becomes false.

What does it mean?


To me it feels as if the [Required] attribute is all I need, since it works in both cases: form values and content. However, besides [BindRequired] there is also [BindNever], which removes a property from binding. Imagine you had a property like "IsAdmin" that you set to true programmatically - you don't want it to be bound from the URL of the call. I was expecting that attempting to set a property decorated with BindNever would invalidate the model state, but it doesn't work like that; it just ignores the parameter. The System.ComponentModel.DataAnnotations namespace doesn't have an equivalent to BindNeverAttribute.
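A hypothetical sketch of that scenario:

using Microsoft.AspNetCore.Mvc.ModelBinding;

public class UserEditModel
{
    public string DisplayName { get; set; }

    // never populated by model binding; a forged IsAdmin value in the URL or form data
    // is silently ignored and does not invalidate the model state
    [BindNever]
    public bool IsAdmin { get; set; }
}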

There is also the issue of the many many many points where the behavior of MVC can be changed and customized. There are a lot of out of the box classes that come with ASP.Net MVC; it only makes sense to use Attribute classes from the same package.

What about the inconsistency? Why does a [BindRequired] attribute not matter when populating a model from the body of the request? Frankly, I don't know. I believe it is because in that case the entire model class has been "bound", its individual properties being "populated" instead.

Hope that helps out a little.

This is part of the .NET Core and VS Code series. Since the release of .NET Core 1.1, a lot has changed, so I am trying to keep this up to date as much as possible, but there still might be some leftover obsolete info from earlier versions.

I am continuing my series about .NET Core, using Visual Studio Code only, on Windows, with as little command line work as possible. Read about writing a console application and then a very simple web application before you read this part.

Setup


In this post I will attempt to write a very simple Web API using .NET Core. I intend to have some sort of authentication and then be able to read and write stuff using REST API calls. As with the projects before, I will use Visual Studio Code to create a folder, open it, then do stuff in it after running the command 'dotnet new', preferably changing the namespace so that it is not always ConsoleApplication. This time the steps will be a little bit different, more in line with what you would do in real life:
  1. Create a folder for our project called WebCore
  2. In it create an API folder
  3. Open a command prompt in the API folder and type 'dotnet new web' (in .NET Core 1.0 you could omit the template, but from 1.1 you need to use it explicitly)
  4. Open Visual Studio Code by typing 'code' or by any other means
  5. Close the command prompt window
  6. In Code select the API folder and open Program.cs
  7. To the warning "Required assets to build and debug are missing from your project. Add them?" click Yes
  8. To the info "There are unresolved dependencies from '<your project>.csproj'. Please execute the restore command to continue." click Restore
  9. Change the namespace to WebCore.API
  10. Press Ctrl-Shift-B to build the project.

Right now you should have a web project that compiles. In order to turn this into an API project, we need to understand a few things. So go down to the tutorial for ASP.Net Core API that uses Visual Studio to read about what Microsoft suggests, download the zip file of all the docs and tutorials and look in /Docs-master/aspnet/tutorials/first-web-api/sample/src/TodoApi to see what they did in code (or just browse them on GitHub). You also need to understand that while in normal .NET (how do we call it now?) MVC and Web API are two branches that often have similar functionality in similarly named classes, in .NET Core the API is just part of MVC. So no WebApi namespaces, just MVC.

That is why, to turn our app into an API project, we need to add the MVC packages to the dependencies of project.json:
,"Microsoft.AspNetCore.Mvc": "1.0.0"
,"Microsoft.AspNetCore.Mvc.Core": "1.0.0"
or, with the csproj-based tooling, add
<PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.1" />
to an ItemGroup in API.csproj, and then tell the project to use MVC in code. This is done by adding services.AddMvc(); in ConfigureServices and replacing the app.Run line in Configure with app.UseMvcWithDefaultRoute(); - both in Startup.cs.

At this point we should have something like this:

project.json
{
"version": "1.0.0-*",
"buildOptions": {
"debugType": "portable",
"emitEntryPoint": true
},
"dependencies": {},
"frameworks": {
"netcoreapp1.0": {
"dependencies": {
"Microsoft.NETCore.App": {
"type": "platform",
"version": "1.0.0"
},
"Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
"Microsoft.AspNetCore.Mvc": "1.0.0",
"Microsoft.AspNetCore.Mvc.Core": "1.0.0"
},
"imports": "dnxcore50"
}
}
}


API.csproj
<Project Sdk="Microsoft.NET.Sdk.Web">

<PropertyGroup>
<TargetFramework>netcoreapp1.1</TargetFramework>
</PropertyGroup>

<ItemGroup>
<Folder Include="wwwroot\" />
</ItemGroup>

<ItemGroup>
<PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
<PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.1" />
</ItemGroup>

</Project>

Program.cs
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;

namespace WebCore.API
{
public class Program
{
public static void Main(string[] args)
{
var host = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.UseIISIntegration()
.UseStartup<Startup>()
.Build();

host.Run();
}
}
}

Startup.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace WebCore.API
{
public class Startup
{
// This method gets called by the runtime. Use this method to add services to the container.
// For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc();
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
loggerFactory.AddConsole();

if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}

app.UseMvcWithDefaultRoute();

/*app.Run(async (context) =>
{
await context.Response.WriteAsync("Hello World!");
});*/
}
}
}

Clean and name the launch settings as well:
.vscode/launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": ".NET Core Launch (web API)",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "build",
"program": "${workspaceRoot}/bin/Debug/netcoreapp1.0/API.dll",
"args": [],
"cwd": "${workspaceRoot}",
"externalConsole": false,
"stopAtEntry": false
}
]
}


The Controller


Let's add functionality to our API. Create a folder called Controllers and in it add a NoteController.cs file. Just copy paste the code for now, we will understand it later.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace WebCore.API.Controllers
{
[Route("api/[controller]")]
public class NoteController : Controller
{
private static List<Note> _notes;
static NoteController()
{
_notes = new List<Note>();
}


[HttpGet]
public IEnumerable<Note> GetAll()
{
return _notes.AsReadOnly();
}

[HttpGet("{id}", Name = "GetNote")]
public IActionResult GetById(string id)
{
var item = _notes.Find(n => n.Id == id);
if (item == null)
{
return NotFound();
}
return new ObjectResult(item);
}

[HttpPost]
public IActionResult Create([FromBody] Note item)
{
if (item == null)
{
return BadRequest();
}
item.Id = (_notes.Count + 1).ToString();
_notes.Add(item);
return CreatedAtRoute("GetNote", new { controller = "Note", id = item.Id }, item);
}

[HttpDelete("{id}")]
public void Delete(string id)
{
_notes.RemoveAll(n => n.Id == id);
}

public class Note
{
public string Id { get; set; }
public string Content { get; set; }
}
}
}

In this new class we have a lot of interesting things to behold. First of all there is the Route attribute that uses a [controller] token (well explained here). Then there are the Http attributes: HttpGet, HttpPost, HttpPut, HttpDelete, which, as their names imply, define methods on the API controller that are accessed via the respective HTTP verbs so beloved by the REST crowd.

An important thing to grasp from this is what happens with the HttpGet attribute (note it has a Name defined) and the later used CreatedAtRoute, which are actually Web API v2 concepts. In our case, whenever we create a new item we return the item as a result, but also set the Location header to the route that would "get" it, so that the client knows how to get to it.

Testing the result


Let's test what we did. While the GET methods are easy to test from the browser (http://localhost:5000/api/note should show an empty array result []), the POST and DELETE are trickier. The Microsoft documentation recommends using Fiddler, but I would go with Postman, which is easy to install as a Chrome application.

In Postman, don't forget to set the Content-Type header to application/json:


Then POST to /api/note/ a content like
{
"Content": "test"
}
:


The result of the call should be as explained previously
{
"id": "1",
"content": "test"
}

Go with the browser to /api/note/ or /api/note/1 and you should get results.

More complexity


What happened there exactly? We just added a class and it magically worked. How did .Net Core do that? Core comes with Dependency Injection and Inversion of Control by default. In our case, just having a Controller class with a Route attribute made the difference. The line
app.UseMvcWithDefaultRoute();
is the code that enables that behavior. Let me demonstrate more of this "magic". In our example so far I've hardcoded the Note class inside the Controller class, but usually the class would be part of the functionality of the API and there would be a repository handling saving stuff in the database.

To avoid polluting the post with a lot of code, here is the improved source code of the API. Note is now a more complex object, and the controller is initialized with an INoteRepository instance which is bound to a NoteRepository implementation. The API works just as before, only now Note has Key, Subject and Body properties instead of Id and Content.

The thing to learn from this version is that in the Startup class, in ConfigureServices, with the line
services.AddSingleton<INoteRepository, NoteRepository>();
we tell Core that whenever an implementation of INoteRepository is requested, it should return the singleton instance of NoteRepository. From this line alone Core knows to initialize the NoteController constructor with the correct repository instance.
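A rough sketch of how these pieces fit together; the shapes here are illustrative and simplified, not the actual linked source:

using System.Collections.Generic;

public class Note
{
    public string Key { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public interface INoteRepository
{
    IEnumerable<Note> GetAll();
    void Add(Note note);
}

public class NoteRepository : INoteRepository
{
    private readonly List<Note> _notes = new List<Note>();
    public IEnumerable<Note> GetAll() { return _notes.AsReadOnly(); }
    public void Add(Note note) { _notes.Add(note); }
}

// Startup.ConfigureServices: map the interface to its implementation
// services.AddSingleton<INoteRepository, NoteRepository>();

// NoteController: the constructor parameter is resolved by Core automatically
// public NoteController(INoteRepository repository) { _repository = repository; }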

Security


Now, we don't want any peasant to come and mess with our important notes. We require some way of authenticating our operations. There is a very detailed section about Security in the Core documentation, but I only wanted a simple example. I worked on it for hours just to notice that the line instructing the application builder to use authentication was after the one telling it to use MVC. There were also some naming bugs that made me waste a lot of time.

Bottom line: here is the sample code with authentication. You cannot use any API calls unless you are a User. You log in as a user by calling /api/note/login/12345. The Create/Delete APIs are also unavailable to plain Users; you need to be an Admin. There is a commented line in the code that also adds the Admin role Claim when logging in. I also added logging, so you can see what goes wrong in the Debug Console.

More information about how to implement authentication without Identity (as most Microsoft samples want) can be found at Using Cookie Middleware without ASP.NET Core Identity and my example is based on that.

Impressions on Visual Studio Code


Update: some of the complaints underneath are not correct anymore, but I am keeping my original reaction unchanged, since it reflects my feelings at the time.

The more complex the project becomes, the less Visual Studio Code can handle it. Intellisense is bloody awful, the exceptions can sometimes be caught only by adding logging to the console, the basic editing features are lacking - like a serious Undo queue or auto formatting when typing or closing brackets or even syntax highlighting at times and there are some features that plainly don't work, for example when you are writing a new class and an option for "Generate Type" appears, but it does nothing. Same for "Generate Variable". While I plan to test VS Code with a large project created in Visual Studio proper, I don't have high hopes. Remember when Microsoft created Silverlight with Javascript and it sucked and they immediately switched to WPF based Silverlight? It feels like that. However, for nostalgic reasons mostly, I also enjoy working in this spartan environment, especially when working with recently released features. It reminds me of the days when I was trying to connect various Linux software with basically duct tape and paper clips.

This is part of the .NET Core and VS Code series. Since the release of .NET Core 1.1, a lot has changed, so I am trying to keep this up to date as much as possible, but there still might be some leftover obsolete info from earlier versions.

Continuing the post about building a console application in .Net Core with the free Visual Studio Code tool on Windows, I will be looking at ASP.Net, also using Visual Studio Code. Some of the things required to understand this are in the previous post, so do read that one, at least for the part with installing Visual Studio Code and .NET Core.

Reading into it


Start with reading the introduction to ASP.Net Core, take a look at the ASP.Net Core Github repository and then continue with the information on the Microsoft sites, like the Getting Started section. Much more effort went into the documentation for ASP.Net Core, which shows - to my chagrin - that the web is a primary focus for the .Net team, unlike the language itself, native console apps, WPF and so on. Or maybe they're just working on it.

Hello World - web style


Let's get right into it. Following the Getting Started section, I will display the steps required to create a working web Hello World application, then we continue with details of implementation for common scenarios. Surprisingly, creating a web app in .Net Core is very similar to doing a console app; in fact we could just continue with the console program we wrote in the first post and it would work just fine, even without a 'web' section in launch.json.
  1. Go to the Explorer icon and click on Open Folder (or File → Open Folder)
  2. Create a Code Hello World folder
  3. Right click under the folder in Explorer and choose Open in Command Prompt - if you have it. Some Windows versions removed this option from their context menu. If not, select Open in New Window, go to the File menu and select Open Command Prompt (as Administrator, if you can) from there
  4. Write 'dotnet new console' in the console window, then close the command prompt and the Windows Explorer window, if you needed it
  5. Select the newly created Code Hello World folder
  6. From the open folder, open Program.cs
  7. To the warning "Required assets to build and debug are missing from your project. Add them?" click Yes
  8. To the info "There are unresolved dependencies from '<your project>.csproj'. Please execute the restore command to continue." click Restore
  9. Click the Debug icon and press the green play arrow

Wait! Aren't these the steps for a console app? Yes they are. To turn this into a web application, we have to go through these few more steps:
  1. Open project.json and add
    ,"Microsoft.AspNetCore.Server.Kestrel": "1.0.0"
    to dependencies (right after Microsoft.NETCore.App)
  2. Open the .csproj file and add
    <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    </ItemGroup>
  3. Open Program.cs and change it by adding
    using Microsoft.AspNetCore.Hosting;
    and replacing the Console.WriteLine thing with
    var host = new WebHostBuilder()
    .UseKestrel()
    .UseStartup<Startup>()
    .Build();
    host.Run();
  4. Create a new file called Startup.cs that looks like this:
    using System;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;

    namespace <your namespace here>
    {
    public class Startup
    {
    public void Configure(IApplicationBuilder app)
    {
    app.Run(context =>
    {
    return context.Response.WriteAsync("Hello from ASP.NET Core!");
    });
    }
    }
    }

There are a lot of warnings and errors displayed, but the program compiles and when run it keeps running until stopped. To see the result open a browser window on http://localhost:5000. Closing and reopening the folder will make the warnings go away.

#WhatHaveWeDone


So first we added Microsoft.AspNetCore.Server.Kestrel to the project, along with the Microsoft.AspNetCore namespace. With this we now have access to Microsoft.AspNetCore.Hosting, which allows us to use a fluent interface to build a web host. We UseKestrel (Kestrel is based on libuv, which is a multi-platform support library with a focus on asynchronous I/O. It was primarily developed for use by Node.js, but it's also used by Luvit, Julia, pyuv, and others.) and we UseStartup with a class creatively named Startup. In that Startup class we receive an IApplicationBuilder in the Configure method, to which we attach a simple handler on Run.

In order to understand the delegate sent to the application builder we need to go through the concept of middleware: components in a pipeline. In our case Run was used because, by convention, Run is the last thing to be executed in the pipeline. We could just as well have used Use without invoking the next component. Another option is Map which, like Run, is a convention exposed through an extension method, and which is meant to branch the pipeline.
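A small sketch of the three conventions in a Configure method - my own example, not taken from the documentation:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            // Use: can do work before and after the rest of the pipeline and decides whether to call it
            app.Use(async (context, next) =>
            {
                context.Response.Headers["X-Demo"] = "middleware";
                await next();
            });

            // Map: branches the pipeline for a path prefix
            app.Map("/ping", branch =>
            {
                branch.Run(context => context.Response.WriteAsync("pong"));
            });

            // Run: terminal component, nothing registered after it gets executed
            app.Run(context => context.Response.WriteAsync("Hello from the end of the pipeline!"));
        }
    }
}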

Anyway, all of this you can read in the documentation. A must read is the Fundamentals section.

Various useful things


Let's ponder on what we would like in a real life web site. Obviously we need a web server that responds differently for different URLs, but we also need logging, security, static files. Boilerplate such as errors in the pages and not found pages need to be handled. So let's see how we can do this. Some tutorials are available on how to do that with Visual Studio, but in this post we will be doing this with Code! Instead of fully creating a web site, though, I will be adding to the basic Hello World above one or two features at a time, postponing building a fully functioning source for another blog post.

Logging


We will always need a way of knowing what our application is doing and for this we will implement a more complete Configure method in our Startup class. Instead of only accepting an IApplicationBuilder, we will also be asking for an ILoggerFactory. The main package needed for logging is
"Microsoft.Extensions.Logging": "1.0.0"
, which needs to be added to dependencies in project.json.
From .NET Core 1.1, Logging is included in the Microsoft.AspNetCore package.

Here is some code:

Startup.cs
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Logging;

namespace ConsoleApplication
{
public class Startup
{
public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
{
var logger=loggerFactory.CreateLogger("Sample app logger");
logger.LogInformation("Starting app");
}
}
}

project.json
{
"version": "1.0.0-*",
"buildOptions": {
"debugType": "portable",
"emitEntryPoint": true
},
"dependencies": {},
"frameworks": {
"netcoreapp1.0": {
"dependencies": {
"Microsoft.NETCore.App": {
"type": "platform",
"version": "1.0.0"
},
"Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
"Microsoft.Extensions.Logging": "1.0.0"
},
"imports": "dnxcore50"
}
}
}


So far so good, but where does the logger log? For that matter, why do I need a logger factory, can't I just instantiate my own logger and use it? The logging mechanism intended by the developers is this: you get the logger factory, you add logger providers to it, which in turn instantiate logger instances. So when the logger factory creates a logger, it actually gives you a chain of different loggers, each with their own settings.

For example, in order to log to the console, you run loggerFactory.AddConsole(); for which you need to add the package
"Microsoft.Extensions.Logging.Console": "1.0.0"
to project.json. What AddConsole does is actually factory.AddProvider(new ConsoleLoggerProvider(...));. The provider will then instantiate a ConsoleLogger for the logger chain, which will write to the console.
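With the console provider added, the Configure method from above becomes something like this (assuming the Microsoft.Extensions.Logging.Console package is referenced):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Logging;

namespace ConsoleApplication
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
        {
            // register the console provider; every logger created by the factory now writes to the console
            loggerFactory.AddConsole(LogLevel.Information);

            var logger = loggerFactory.CreateLogger("Sample app logger");
            logger.LogInformation("Starting app");
        }
    }
}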

Additional logging options come with the Debug or EventLog or EventSource packages as you can see at the GitHub repository for Logging. Find a good tutorial on how to create your own Logger here.

Static files


It wouldn't be much of a web server if it didn't serve static files. ASP.Net Core has two concepts for web site roots: Web root and Content root. Web is for web-servable content files, while Content is for application content files, like views and stuff. In order to serve these we use the
<PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
package, we instruct the WebHostBuilder what the web root is, then we tell the ApplicationBuilder to use static files. Like this:

project.json
{
"version": "1.0.0-*",
"buildOptions": {
"debugType": "portable",
"emitEntryPoint": true
},
"dependencies": {},
"frameworks": {
"netcoreapp1.0": {
"dependencies": {
"Microsoft.NETCore.App": {
"type": "platform",
"version": "1.0.0"
},
"Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
"Microsoft.AspNetCore.StaticFiles": "1.0.0"
},
"imports": "dnxcore50"
}
}
}


.csproj file
<Project Sdk="Microsoft.NET.Sdk">

<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp1.1</TargetFramework>
</PropertyGroup>

<ItemGroup>
<PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
<PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
</ItemGroup>

</Project>

Program.cs
using System.IO;
using Microsoft.AspNetCore.Hosting;

namespace ConsoleApplication
{
public class Program
{
public static void Main(string[] args)
{
var path=Path.Combine(Directory.GetCurrentDirectory(),"www");
var host = new WebHostBuilder()
.UseKestrel()
.UseWebRoot(path)
.UseStartup<Startup>()
.Build();

host.Run();
}
}
}

Startup.cs
using Microsoft.AspNetCore.Builder;

namespace ConsoleApplication
{
public class Startup
{
public void Configure(IApplicationBuilder app)
{
app.UseStaticFiles();
}
}
}

Now all we have to do is add a page and some files in the www folder of our app and we have a web server. But what does UseStaticFiles do? It runs app.UseMiddleware&lt;StaticFileMiddleware&gt;(), which adds the static file middleware to the middleware chain and which, after some validations and checks, runs StaticFileContext.SendRangeAsync(). It is worth looking up the code of these files, since it both demystifies the magic of web servers and awes through simplicity.

Routing


Surely we need to serve static content, but what about dynamic content? Here is where routing comes into place.

First add package
"Microsoft.AspNetCore.Routing": "1.0.0"
to project.json, then add some silly routing to Startup.cs:

project.json
{
"version": "1.0.0-*",
"buildOptions": {
"debugType": "portable",
"emitEntryPoint": true
},
"dependencies": {},
"frameworks": {
"netcoreapp1.0": {
"dependencies": {
"Microsoft.NETCore.App": {
"type": "platform",
"version": "1.0.0"
},
"Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
"Microsoft.AspNetCore.Routing": "1.0.0"
},
"imports": "dnxcore50"
}
}
}


From .NET Core 1.1, Routing is found in the Microsoft.AspNetCore package.

Program.cs
using Microsoft.AspNetCore.Hosting;

namespace ConsoleApplication
{
public class Program
{
public static void Main(string[] args)
{
var host = new WebHostBuilder()
.UseKestrel()
.UseStartup<Startup>()
.Build();

host.Run();
}
}
}

Startup.cs
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddRouting();
}

public void Configure(IApplicationBuilder app)
{
app.UseRouter(new HelloRouter());
}

public class HelloRouter : IRouter
{
public VirtualPathData GetVirtualPath(VirtualPathContext context)
{
return null;
}

public Task RouteAsync(RouteContext context)
{
var requestPath = context.HttpContext.Request.Path;
if (requestPath.StartsWithSegments("/hello", StringComparison.OrdinalIgnoreCase))
{
context.Handler = async c =>
{
await c.Response.WriteAsync($"Hello world!");
};
}
return Task.FromResult(0);
}
}
}
}

Some interesting things were added here. First of all, we added a new method to the Startup class, called ConfigureServices, which receives an IServiceCollection. To this we AddRouting, with a by now familiar extension method that shortcuts adding a whole bunch of services to the collection: constraint resolvers, URL encoders, route builders, routing marker services and so on. Then in Configure we UseRouter (a shortcut for using a RouterMiddleware) with a custom built implementation of IRouter. What it does is look for a /hello URL request and return the "Hello world!" string. Go on and test it by going to http://localhost:5000/hello.

Take a look at the documentation for routing for a proper example, specifically using a RouteBuilder to build an implementation of IRouter from a default handler and a list of mapped routes with their handlers, and how to map routes with parameters and constraints. Another post will handle a complete web site, but for now let's just build a router with several routes and see how it goes:

Startup.cs
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddRouting();
}

public void Configure(IApplicationBuilder app)
{
var defaultHandler = new RouteHandler(
c => c.Response.WriteAsync($"Default handler! Route values: {string.Join(", ", c.GetRouteData().Values)}")
);

var routeBuilder = new RouteBuilder(app, defaultHandler);

routeBuilder.Routes.Add(new Route(new HelloRouter(), "hello/{name:alpha?}",
app.ApplicationServices.GetService<IInlineConstraintResolver>()));

routeBuilder.MapRoute("Track Package Route",
"package/{operation:regex(track|create|detonate)}/{id:int}");

var router = routeBuilder.Build();
app.UseRouter(router);
}

public class HelloRouter : IRouter
{
public VirtualPathData GetVirtualPath(VirtualPathContext context)
{
return null;
}

public Task RouteAsync(RouteContext context)
{
var name = context.RouteData.Values["name"] as string;
if (String.IsNullOrEmpty(name)) name="World";

var requestPath = context.HttpContext.Request.Path;
if (requestPath.StartsWithSegments("/hello", StringComparison.OrdinalIgnoreCase))
{
context.Handler = async c =>
{
await c.Response.WriteAsync($"Hi, {name}!");
};
}
return Task.FromResult(0);
}
}
}
}

Run it and test the following URLs: http://localhost:5000/ - should return nothing but a 404 code (which you can check in the Network section of your browser's developer tools), http://localhost:5000/hello - should display "Hi, World!", http://localhost:5000/hello/Siderite - should display "Hi, Siderite!", http://localhost:5000/package/track/12 - should display "Default handler! Route values: [operation, track], [id, 12]", http://localhost:5000/package/track/abc - should again return a 404 code, since there is a constraint on id to be an integer.

How does that work? First of all we added a more complex HelloRouter implementation that handles a name. To the route builder we added a default handler that just displays the parameters it receives; a Route instance that contains the URL template, the HelloRouter implementation and an inline constraint resolver for the parameter; and finally a mapped route that goes to the default handler. That is why, if neither the hello nor the package URL template is matched, the site returns 404; otherwise it chooses one handler or the other and displays the result.

Error handling


Speaking of 404 code that can only be seen in the browser's developer tools, how do we display an error page? As usual, read the documentation to understand things fully, but for now we will be playing around with the routing example above and adding error handling to it.

The short answer is to add some middleware to the mix. Extension methods in
"Microsoft.AspNetCore.Diagnostics": "1.0.0"

like UseStatusCodePages, UseStatusCodePagesWithRedirects, UseStatusCodePagesWithReExecute, UseDeveloperExceptionPage, UseExceptionHandler.

Here is an example of the Startup class:
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

namespace ConsoleApplication
{
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddRouting();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app.UseDeveloperExceptionPage();
app.UseStatusCodePages(/*context=>{
throw new Exception($"Page return status code: {context.HttpContext.Response.StatusCode}");
}*/);

var defaultHandler = new RouteHandler(
c => c.Response.WriteAsync($"Default handler! Route values: {string.Join(", ", c.GetRouteData().Values)}")
);

var routeBuilder = new RouteBuilder(app, defaultHandler);

routeBuilder.Routes.Add(new Route(new HelloRouter(), "hello/{name:alpha?}",
app.ApplicationServices.GetService<IInlineConstraintResolver>()));

routeBuilder.MapRoute("Track Package Route",
"package/{operation:regex(track|create|detonate)}/{id:int}");

var router = routeBuilder.Build();
app.UseRouter(router);
}

public class HelloRouter : IRouter
{
public VirtualPathData GetVirtualPath(VirtualPathContext context)
{
return null;
}

public Task RouteAsync(RouteContext context)
{
var name = context.RouteData.Values["name"] as string;
if (String.IsNullOrEmpty(name)) name = "World";
if (String.Equals(name, "error", StringComparison.OrdinalIgnoreCase))
{
throw new ArgumentException("Hate errors! Won't say Hi!");
}

var requestPath = context.HttpContext.Request.Path;
if (requestPath.StartsWithSegments("/hello", StringComparison.OrdinalIgnoreCase))
{
context.Handler = async c =>
{
await c.Response.WriteAsync($"Hi, {name}!");
};
}
return Task.FromResult(0);
}
}
}
}

All you have to do now is try some non-existent route like http://localhost:5000/help or cause an error with http://localhost:5000/hello/error.

Final thoughts


The post is already too long, so I won't be covering here a lot of what is needed, such as security or setting the environment name so one can, for example, only show the development error page for the development environment. The VS Code ecosystem is at its beginning and work is being done to improve it as we speak. The basic concept of middleware allows us to build whatever we want, starting from scratch. In a future post I will discuss MVC and Web API and the more advanced concepts related to the web. Perhaps it will be done in VS Code as well, perhaps not, but my point is that with the knowledge so far one can control all of these things by knowing how to handle a few classes in a list of middleware. You might not need an off the shelf framework; you may write your own.

Find a melange of the code above in this GitHub repository.

If you read an Angular book or a howto, you will think that Angular is the greatest discovery since fire. Everything is neatly stacked in modules, controllers, templates, directives, factories, etc. The problem comes when you want to use some code of your own, using simple Javascript that does specific work, and then you want to link it nicely with AngularJS. It is not always easy. My example concerns the simple display of a dialog which edits an object. I want it to work on every page, so I added it to the general layout template. The layout does not have a controller, and even if I added one, the dialog engine I had been using was buggy, so I decided to just use jQuery.dialog.

So here is my conundrum: How to load the content of a dialog from an Angular template, display it with jQuery.dialog, load the information with jQuery.get, then bind its input elements to an Angular scope object. I've tried the obvious: just load the template in the dialog and expect Angular to notice a new DOM element was added and parse it and work its magic. It didn't work. Why can't I just call an angular.refresh(elem); function and get it over with, I thought. There are several other solutions. One is to not create the content dynamically at all, just add it to the layout, mark it with ng-controller="something" and then, in the controller, save the object you are interested in or the scope as some sort of globally accessible object that you instantiate from jQuery.get. The dialog would just move the element around, afterwards. That means you need to create a controller, maybe in another file, to be nice, then load it into your page. Another is to create some sort of directive or script tag that loads the Angular template dynamically and to hope it works.

Long story short, none of these solutions appealed to me. I wanted a simple refresh(elem) function. And there is one. It is called angular.injector. You call it with the names of the modules you need to load ('ng' being one of them and usually the main application module being the second). The result is a function that can use invoke to get the same results as a controller constructor. And that is saying something: if you can do the work that the controller does in your own block of code, you don't need a zillion controllers making your life miserable, nor do you need to mark the HTML uselessly for very simple functionality.

Without further ado, here is a function that takes as parameters an element and a data object. The function will force angular to compile said element like it was part of the angular main application, then bind to the main scope the properties of the data object:

function angularCompile(elem, data) {
// create an injector
var $injector = angular.injector(['ng','app']);

// use the type inference to auto inject arguments, or use implicit injection
$injector.invoke(function($rootScope, $compile, $document){
var compiled = $compile(elem || $document);
compiled($rootScope);
if (data) {
for (var k in data) {
if (data.hasOwnProperty(k)) {
$rootScope[k]=data[k];
}
}
}
$rootScope.$digest();
});
}


Example usage:

angularCompile(dialog[0],{editedObject: obj}); // will take the jQuery dialog element, compile it, and add to the scope the editedObject property with the value of obj.


Full code:

OpenTranslationDialog=function(Rule, onProceed, onCancel) {
jQuery.ajax({
type: 'GET',
url: '/Content/ng-templates/dialogs/Translation.html',
data: Rule,
success: function(data) {
var dialog=jQuery('<div></div>')
.html(data)
.dialog({
resizable:true,
width:700,
modal:true,
buttons: {
"Save": function() {
var url='/some/api/url';
jQuery.ajax({
type:'PUT',
url:url,
data:Rule,
success:function() {
if (onProceed) onProceed();
$(this).dialog( "close" );
},
error:function() {
alert('There was an error saving the rule');
}
});
},
Cancel: function() {
if (onCancel) onCancel();
$(this).dialog( "close" );
}
}
});

angularCompile(dialog[0],{Rule:Rule});
},
error:function() {
alert('There was an error getting the dialog template');
}
});
}


Before you take my word on it, though, beware: I am an Angular noob and my desire here was to hack away at it in order to merge my own code with the nice structured code of my colleagues, who now hate me. Although they liked angular.injector when I showed it to them :)

I have been hearing about the AngularJS library for a few months now, people often praising it as the new paradigm of web development. It is basically a JavaScript MVC framework that makes heavy use of markup language in order to declare the desired behaviour. Invented at Google by Miško Hevery, it uses cacheable templates, databinding and dependency injection to combine the various components that otherwise are independent and testable. It also comes with its own testing framework (unit and end-to-end) and a way to describe unit tests Jasmine (BDD) style.

So I started reading about this new framework in the book intuitively called AngularJS, written by Brad Green and Shyam Seshadri. They start with an anecdote, discussing how they were working on a web application at Google. They had already written 17,000 lines of code in about 6 months and it was almost finished, albeit with great frustration related to development speed and testability. This guy, Miško Hevery, tells everyone that by using a framework that he wrote in his spare time (you gotta love devs!) they could rewrite the whole application in two weeks. He was wrong: they did it in three weeks, and at the end the whole thing had only 1,500 lines of code and was fully testable. This was a great beginning for the book, as it starts with a promise and then (sorry, couldn't help the pun - you will see what I mean if you read the book or know AngularJS already) it describes how to achieve your goals. The book itself is not large, about 160 PDF pages, and can be used as both a primer and a reference. It describes the basic concepts of AngularJS and how they can be put to work, with some small app examples at the end. Of course, you have a link to where to download all their code samples.

What do I think about the book? It was pretty good. It shows the authors' preference towards Linux setups, but it is not annoying. Each chapter is clear and to the point. The framework itself, though, is original enough that after a few chapters it is almost impossible to understand everything without tinkering with the code yourself. Unfortunately I didn't have the time and disposition to do that, so just because I've read the book doesn't mean I know how to work with Angular, but I am confident that when I will actually start working with it, it will all come together in my mind. Also, as I was saying, the book can easily be used as a reference. It is not a complete overview, not every AngularJS feature and gotcha can be found in its pages, but it's good enough.

What do I think about the framework? It seems pretty spectacular. My only experience with JavaScript MVC frameworks is a short brush with BackboneJS. At the time I thought I would be working with it a lot and boasted here that interesting posts would appear. Alas, it was not to be. Sorry about that, maybe better luck with Angular. Backbone was pretty interesting, but it had a horrendous way of working with data models and it was very easy to break something and not realize where the problem came from. There seems to be a lot more thought put into Angular. An interesting point is that the writers advertise TDD as a way of actually working and claim they do so themselves. I have seen many people try it and give up, but I have hopes for JavaScript. You don't need to compile things, you don't need complicated servers or time consuming deployment steps: just change stuff and run the tests and/or refresh a page. I like the fact that the creators of AngularJS put this much work into making everything testable.
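
As an illustration of why testing JavaScript this way feels so light, here is a minimal Jasmine-style spec for the hypothetical GreetingController sketched above (again my own made-up example, not something from the book): no compilation, no deployment, just refresh the test runner.

// A minimal Jasmine spec for the hypothetical GreetingController above.
// It relies on the ngMock module (angular-mocks.js), which provides the
// global module() and inject() helpers; run it with Karma or any Jasmine runner.
describe('GreetingController', function() {
    beforeEach(module('demo'));

    it('starts with a default name', inject(function($rootScope, $controller) {
        var scope = $rootScope.$new();
        $controller('GreetingController', { $scope: scope });
        expect(scope.name).toBe('world');
    }));
});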

So go ahead: read the book and try the framework!

Update 24 Aug 2013: I've started reading dev blogs again and I've stumbled upon a 70 minute video by Dan Wahlin presenting AngularJS. His explanations seemed a lot more down to earth than those in the book, so I felt that his video complements what is written there rather well. Here it is:

ASP.NET MVC 4 and the Web API, by Jamie Kurtz, is one of the new breed of technical books that read like a blog entry, albeit a very long one. The book is merely 100 pages long, but it is to the point, with links to code on GitHub and references to other resources for details that are not the subject of the book. The principles behind the architecture are discussed and explained, the machine setup and configuration are described, and then bam! all the pieces fit together. Even if I don't fully agree with some of Kurtz's recommendations, I have to admit this is probably a very, very useful book.

What is it about? It describes how to create a REST web API, complete with authentication, authorization, logging and unit testing. It discusses ORMs (with OData), DI, source control, the basics of REST and MVC, and all the other tools required. But what I believe to be the strength of the book's approach is the clear separation of modules. One can easily find fault with one of the pieces recommended by the author and just as easily replace only that component, leaving the others as they are.
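
To make the REST part a little more concrete, here is roughly what consuming such an API looks like from JavaScript. The /api/tasks resource and the URLs are hypothetical, made up just for illustration, but the conventions (verbs mapped to operations, meaningful status codes, a Location header on create) are the kind the book talks about.

// Hypothetical client-side view of REST conventions: HTTP verbs map to
// operations and the response codes and headers carry meaning.
jQuery.ajax({
    type: 'POST',
    url: '/api/tasks',
    contentType: 'application/json',
    data: JSON.stringify({ subject: 'Read chapter 2' }),
    success: function(data, status, xhr) {
        // a well-behaved REST API answers a create with 201 Created and a
        // Location header pointing at the newly created resource
        var newTaskUrl = xhr.getResponseHeader('Location');
        jQuery.ajax({ type: 'GET', url: newTaskUrl });   // expects 200 OK
    }
});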

The structure of the book is as follows:

  • Chapter 1 - A quick introduction to ASP.Net MVC4 as a platform for REST services, via the Web API.
  • Chapter 2 - The basics of REST services. There are very subtle points described there, including the correct HTTP codes and headers in the response and discoverability. It also points to the prerequisites an API must meet in order to be called REST, like the REST Maturity Model.
  • Chapter 3 - Modelling of an API. This includes the way URLs are formed, the conventions in use and how the API should look to the client.
  • Chapter 4 - The scaffolding of your Visual Studio project, the logging configuration, the folder structure, the API DTOs.
  • Chapter 5 - Putting components together: configuring NInject, designing your classes with DI and testability in mind.
  • Chapter 6 - Security: really simple implementation with a lot of power provided by the default Microsoft Membership Providers.
  • Chapter 7 - Actually building the API, making some smoke tests, seeing it all work.


The complete source of the project described in the book can be found on GitHub.

My personal opinion of the setup is that, while it all seems to fit together, some of the technologies are a bit over the top. NInject, which I have personal experience with, is very good, but very slow. The ASP.Net Membership scheme is very verbose. While I wouldn't really mind it as implemented in the book, I still cringe at the table names and the zillions of columns. Also, I am slightly opposed to ORMs, mostly because they attempt to mould you into a specific frame of thinking, that of CRUD, making any optimization or deviation from the plan rather difficult. I've had the experience of working on a project that had all of its database access in stored procedures. Finding what accessed a table or a column was a breeze, without knowing anything about the underlying implementation. But even so, as I was saying above, the fact that the author separates concerns so beautifully makes any component replaceable.

I highly recommend this book, especially now, when the world is moving toward HTML and JavaScript interfaces built on web APIs.